vagrant Docker Configuration Docker Configuration ===================== The Docker provider has some provider-specific configuration options you may set. A complete reference is shown below. ### Required One of the following settings is required when using the Docker provider: * [`build_dir`](#build_dir) (string) - The path to a directory containing a Dockerfile. * [`image`](#image) (string) - The image to launch, specified by the image ID or a name such as `ubuntu:12.04`. * [`git_repo`](#git_repo) (string) - The URL of a git repository to build the image from. Supports pulling specific tags, branches and revision, consult the [docker documenation](https://docs.docker.com/engine/reference/commandline/build/#/git-repositories) for more information. ### Optional General settings: * [`build_args`](#build_args) (array of strings) - Extra arguments to pass to `docker build` when `build_dir` is in use. * [`cmd`](#cmd) (array of strings) - Custom command to run on the container. Example: `["ls", "/app"]`. * [`compose`](#compose) (boolean) - If true, Vagrant will use `docker-compose` to manage the lifecycle and configuration of containers. This defaults to false. * [`compose_configuration`](#compose_configuration) (Hash) - Configuration values used for populating the `docker-compose.yml` file. The value of this Hash is directly merged and written to the `docker-compose.yml` file allowing customization of non-services items like networks and volumes. * [`create_args`](#create_args) (array of strings) - Additional arguments to pass to `docker run` when the container is started. This can be used to set parameters that are not exposed via the Vagrantfile. * [`dockerfile`](#dockerfile) (string) - Name of the Dockerfile in the build directory. This defaults to "Dockerfile" * [`env`](#env) (hash) - Environmental variables to expose into the container. * [`expose`](#expose) (array of integers) - Ports to expose from the container but not to the host machine. Useful for links. * [`link`](#link) (method, string argument) - Link this container to another by name. The argument should be in the format of `(name:alias)`. Example: `docker.link("db:db")`. Note, if you are linking to another container in the same Vagrantfile, make sure you call `vagrant up` with the `--no-parallel` flag. * [`force_host_vm`](#force_host_vm) (boolean) - If true, then a host VM will be spun up even if the computer running Vagrant supports Linux containers. This is useful to enforce a consistent environment to run Docker. This value defaults to "false" on Linux, Mac, and Windows hosts and defaults to "true" on other hosts. Users on other hosts who choose to use a different Docker provider or opt-in to the native Docker builds can explicitly set this value to false to disable the behavior. * [`has_ssh`](#has_ssh) (boolean) - If true, then Vagrant will support SSH with the container. This allows `vagrant ssh` to work, provisioners, etc. This defaults to false. * [`host_vm_build_dir_options`](#host_vm_build_dir_options) (hash) - Synced folder options for the `build_dir`, since the build directory is synced using a synced folder if a host VM is in use. * [`name`](#name) (string) - Name of the container. Note that this has to be unique across all containers on the host VM. By default Vagrant will generate some random name. * [`pull`](#pull) (bool) - If true, the image will be pulled on every `up` and `reload`. Defaults to false. * [`ports`](#ports) (array of strings) - Ports to expose from the container to the host. 
These should be in the format of `host:container`. * [`remains_running`](#remains_running) (boolean) - If true, Vagrant expects this container to remain running and will make sure that it does for a certain amount of time. If false, then Vagrant expects that this container will automatically stop at some point, and will not error if it sees it do that. * [`stop_timeout`](#stop_timeout) (integer) - The amount of time to wait when stopping a container before sending a SIGTERM to the process. * [`vagrant_machine`](#vagrant_machine) (string) - The name of the Vagrant machine in the `vagrant_vagrantfile` to use as the host machine. This defaults to "default". * [`vagrant_vagrantfile`](#vagrant_vagrantfile) (string) - Path to a Vagrantfile that contains the `vagrant_machine` to use as the host VM if needed. * [`volumes`](#volumes) (array of strings) - List of directories to mount as volumes into the container. These directories must exist in the host where Docker is running. If you want to sync folders from the host Vagrant is running, just use synced folders. Below, we have settings related to auth. If these are set, then Vagrant will `docker login` prior to starting containers, allowing you to pull images from private repositories. * [`email`](#email) (string) - Email address for logging in. * [`username`](#username) (string) - Username for logging in. * [`password`](#password) (string) - Password for logging in. * [`auth_server`](#auth_server) (string) - The server to use for authentication. If not set, the Docker Hub will be used. vagrant Docker Commands Docker Commands ================ The Docker provider exposes some additional Vagrant commands that are useful for interacting with Docker containers. This helps with your workflow on top of Vagrant so that you have full access to Docker underneath. ### docker-exec `vagrant docker-exec` can be used to run one-off commands against a Docker container that is currently running. If the container is not running, an error will be returned. ``` $ vagrant docker-exec app -- rake db:migrate ``` The above would run `rake db:migrate` in the context of an `app` container. Note that the "name" corresponds to the name of the VM, **not** the name of the Docker container. Consider the following Vagrantfile: ``` Vagrant.configure(2) do |config| config.vm.provider "docker" do |d| d.image = "consul" end end ``` This Vagrantfile will start the official Docker Consul image. However, the associated Vagrant command to `docker-exec` into this instance is: ``` $ vagrant docker-exec -it -- /bin/sh ``` In particular, the command is actually: ``` $ vagrant docker-exec default -it -- /bin/sh ``` Because "default" is the default name of the first defined VM. In a multi-machine Vagrant setup as shown below, the "name" attribute corresponds to the name of the VM, **not** the name of the container: ``` Vagrant.configure do |config| config.vm.define "web" do config.vm.provider "docker" do |d| d.image = "nginx" end end config.vm.define "consul" do config.vm.provider "docker" do |d| d.image = "consul" end end end ``` The following command is invalid: ``` # Not valid $ vagrant docker-exec -it nginx -- /bin/sh ``` This is because the "name" of the VM is "web", so the command is actually: ``` $ vagrant docker-exec -it web -- /bin/sh ``` For this reason, it is recommended that you name the VM the same as the container. 
In the above example, it is unambiguous that the command to enter the Consul container is: ``` $ vagrant docker-exec -it consul -- /bin/sh ``` ### docker-logs `vagrant docker-logs` can be used to see the logs of a running container. Because most Docker containers are single-process, this is used to see the logs of that one process. Additionally, the logs can be tailed. ### docker-run `vagrant docker-run` can be used to run one-off commands against a Docker container. The one-off Docker container that is started shares all the volumes, links, etc. of the original Docker container. An example is shown below: ``` $ vagrant docker-run app -- rake db:migrate ``` The above would run `rake db:migrate` in the context of an `app` container. vagrant Plugin Development: Guest Capabilities Plugin Development: Guest Capabilities ======================================= This page documents how to add new capabilities for <guests> to Vagrant, allowing Vagrant to perform new actions on specific guest operating systems. Prior to reading this, you should be familiar with the [plugin development basics](development-basics). > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Guest capabilities augment <guests> by attaching specific "capabilities" to the guest, which are actions that can be performed in the context of that guest operating system. The power of capabilities is that plugins can add new capabilities to existing guest operating systems without modifying the core of Vagrant. In earlier versions of Vagrant, all the guest logic was contained in the core of Vagrant and was not easily augmented. Definition Component --------------------- Within the context of a plugin definition, guest capabilities can be defined like so: ``` guest_capability "ubuntu", "my_custom_capability" do require_relative "cap/my_custom_capability" Cap::MyCustomCapability end ``` Guest capabilities are defined by calling the `guest_capability` method, which takes two parameters: the guest to add the capability to, and the name of the capability itself. Then, the block argument returns a class that implements a method named the same as the capability. This is covered in more detail in the next section. Implementation --------------- Implementations should be classes or modules that have a method with the same name as the capability. The method must be immediately accessible on the class returned from the `guest_capability` component, meaning that if it is an instance method, an instance should be returned. In general, class methods are used for capabilities. For example, here is the implementation for the capability above: ``` module Cap class MyCustomCapability def self.my_custom_capability(machine) # implementation end end end ``` All capabilities get the Vagrant machine object as the first argument. Additional arguments are determined by the specific capability, so view the documentation or usage of the capability you are trying to implement for more information. Some capabilities must also return values back to the caller, so be aware of that when implementing a capability. Capabilities always have access to communication channels such as SSH on the machine, and the machine can generally be assumed to be booted. Calling Capabilities --------------------- Since you have access to the machine in every capability, capabilities can also call *other* capabilities. 
This is useful for using the inheritance mechanism of capabilities to potentially ask helpers for more information. For example, the "redhat" guest has a "network\_scripts\_dir" capability that simply returns the directory where networking scripts go. Capabilities on child guests of RedHat such as CentOS or Fedora use this capability to determine where networking scripts go, while sometimes overriding it themselves. Capabilities can be called like so: ``` machine.guest.capability(:capability_name) ``` Any additional arguments given to the method will be passed on to the capability, and the capability will return the value that the actual capability returned. vagrant Plugin Development: Guests Plugin Development: Guests =========================== This page documents how to add new guest OS detection to Vagrant, allowing Vagrant to properly configure new operating systems. Prior to reading this, you should be familiar with the [plugin development basics](development-basics). > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Vagrant has many features that requires doing guest OS-specific actions, such as mounting folders, configuring networks, etc. These tasks vary from operating system to operating system. If you find that one of these does not work for your operating system, then maybe the guest implementation is incomplete or incorrect. Definition Component --------------------- Within the context of a plugin definition, new guests can be defined like so: ``` guest "ubuntu" do require_relative "guest" Guest end ``` Guests are defined with the `guest` method. The first argument is the name of the guest. This name is not actually used anywhere, but may in the future, so choose something helpful. Then, the block argument returns a class that implements the `Vagrant.plugin(2, :guest)` interface. Implementation --------------- Implementations of guests subclass `Vagrant.plugin("2", "guest")`. Within this implementation, only the `detect?` method needs to be implemented. The `detect?` method is called by Vagrant at some point after the machine is booted in order to determine what operating system the guest is running. If you detect that it is your operating system, return `true` from `detect?`. Otherwise, return `false`. Communication channels to the machine are guaranteed to be running at this point, so the most common way to detect the operating system is to do some basic testing: ``` class MyGuest < Vagrant.plugin("2", "guest") def detect?(machine) machine.communicate.test("cat /etc/myos-release") end end ``` After detecting an OS, that OS is used for various [guest capabilities](guest-capabilities) that may be required. Guest Inheritance ------------------ Vagrant also supports a form of inheritance for guests, since sometimes operating systems stem from a common root. A good example of this is Linux is the root of Debian, which further is the root of Ubuntu in many cases. Inheritance allows guests to share a lot of common behavior while allowing distro-specific overrides. Inheritance is not done via standard Ruby class inheritance because Vagrant uses a custom [capability-based](guest-capabilities) system. Vagrant handles inheritance dispatch for you. 
To subclass another guest, specify that guest's name as a second parameter in the guest definition: ``` guest "ubuntu", "debian" do require_relative "guest" Guest end ``` With the above component, the "ubuntu" guest inherits from "debian." When a capability is looked up for "ubuntu", all capabilities from "debian" are also available, and any capabilities in "ubuntu" override parent capabilities. When detecting operating systems with `detect?`, Vagrant always does a depth-first search by searching the children operating systems before checking their parents. Therefore, it is guaranteed in the above example that the `detect?` method on "ubuntu" will be called before "debian." vagrant Plugin Development: Hosts Plugin Development: Hosts ========================== This page documents how to add new host OS detection to Vagrant, allowing Vagrant to properly execute host-specific operations on new operating systems. Prior to reading this, you should be familiar with the [plugin development basics](development-basics). > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Vagrant has some features that require host OS-specific actions, such as exporting NFS folders. These tasks vary from operating system to operating system. Vagrant uses host detection as well as [host capabilities](host-capabilities) to perform these host OS-specific operations. Definition Component --------------------- Within the context of a plugin definition, new hosts can be defined like so: ``` host "ubuntu" do require_relative "host" Host end ``` Hosts are defined with the `host` method. The first argument is the name of the host. This name is not actually used anywhere, but may in the future, so choose something helpful. Then, the block argument returns a class that implements the `Vagrant.plugin(2, :host)` interface. Implementation --------------- Implementations of hosts subclass `Vagrant.plugin("2", "host")`. Within this implementation, only the `detect?` method needs to be implemented. The `detect?` method is called by Vagrant very early on in its initialization process to determine if the OS that Vagrant is running on is this host. If you detect that it is your operating system, return `true` from `detect?`. Otherwise, return `false`. ``` class MyHost < Vagrant.plugin("2", "host") def detect?(environment) File.file?("/etc/arch-release") end end ``` After detecting an OS, that OS is used for various [host capabilities](host-capabilities) that may be required. Host Inheritance ----------------- Vagrant also supports a form of inheritance for hosts, since sometimes operating systems stem from a common root. A good example of this is Linux is the root of Debian, which further is the root of Ubuntu in many cases. Inheritance allows hosts to share a lot of common behavior while allowing distro-specific overrides. Inheritance is not done via standard Ruby class inheritance because Vagrant uses a custom [capability-based](host-capabilities) system. Vagrant handles inheritance dispatch for you. To subclass another host, specify that host's name as a second parameter in the host definition: ``` host "ubuntu", "debian" do require_relative "host" Host end ``` With the above component, the "ubuntu" host inherits from "debian." When a capability is looked up for "ubuntu", all capabilities from "debian" are also available, and any capabilities in "ubuntu" override parent capabilities. 
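As a hypothetical sketch of that override behavior (the `example_tool_path` capability and the class names below are made up for illustration, using the same `host_capability` definition component described for [host capabilities](host-capabilities)):

```
class MyHostPlugin < Vagrant.plugin("2")
  name "My Host Capabilities"

  # Base implementation attached to the "debian" host.
  host_capability "debian", "example_tool_path" do
    require_relative "cap/debian/example_tool_path"
    Cap::Debian::ExampleToolPath
  end

  # Because "ubuntu" inherits from "debian", this definition shadows
  # the parent's implementation when the host is detected as Ubuntu.
  host_capability "ubuntu", "example_tool_path" do
    require_relative "cap/ubuntu/example_tool_path"
    Cap::Ubuntu::ExampleToolPath
  end
end
```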
When detecting operating systems with `detect?`, Vagrant always does a depth-first search by searching the children operating systems before checking their parents. Therefore, it is guaranteed in the above example that the `detect?` method on "ubuntu" will be called before "debian." vagrant Plugin Development: Provisioners Plugin Development: Provisioners ================================= This page documents how to add new [provisioners](../provisioning/index) to Vagrant, allowing Vagrant to automatically install software and configure software using a custom provisioner. Prior to reading this, you should be familiar with the [plugin development basics](development-basics). > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Definition Component --------------------- Within the context of a plugin definition, new provisioners can be defined like so: ``` provisioner "custom" do require_relative "provisioner" Provisioner end ``` Provisioners are defined with the `provisioner` method, which takes a single argument specifying the name of the provisioner. This is the name that used with `config.vm.provision` when configuring and enabling the provisioner. So in the case above, the provisioner would be enabled using `config.vm.provision :custom`. The block argument then lazily loads and returns a class that implements the `Vagrant.plugin(2, :provisioner)` interface, which is covered next. Provisioner Class ------------------ The provisioner class should subclass and implement `Vagrant.plugin(2, :provisioner)` which is an upgrade-safe way to let Vagrant return the proper parent class for provisioners. This class and the methods that need to be implemented are [very well documented](https://github.com/hashicorp/vagrant/blob/master/lib/vagrant/plugin/v2/provisioner.rb). The documentation on the class in the comments should be enough to understand what needs to be done. There are two main methods that need to be implemented: the `configure` method and the `provision` method. The `configure` method is called early in the machine booting process to allow the provisioner to define new configuration on the machine, such as sharing folders, defining networks, etc. As an example, the [Chef solo provisioner](https://github.com/hashicorp/vagrant/blob/master/plugins/provisioners/chef/provisioner/chef_solo.rb#L24) uses this to define shared folders. The `provision` method is called when the machine is booted and ready for SSH connections. In this method, the provisioner should execute any commands that need to be executed.
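To make those two methods concrete, here is a minimal sketch of a provisioner implementation. The synced folder path and shell command are placeholders chosen for illustration, not part of the documented interface:

```
class Provisioner < Vagrant.plugin(2, :provisioner)
  # Called early in the boot process, before the machine is up.
  # A good place to request shared folders or other machine
  # configuration that the provisioner relies on.
  def configure(root_config)
    root_config.vm.synced_folder ".", "/vagrant-custom"
  end

  # Called once the machine is booted and reachable over SSH.
  # @machine and @config are set by the superclass initializer.
  def provision
    @machine.ui.info("Running the custom provisioner")
    @machine.communicate.sudo("echo provisioned > /tmp/provisioned")
  end
end
```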
vagrant Plugins Plugins ======== Vagrant comes with many great features out of the box to get your environments up and running. Sometimes, however, you want to change the way Vagrant does something or add additional functionality to Vagrant. This can be done via Vagrant *plugins*. Plugins are powerful, first-class citizens that extend Vagrant using a well-documented, stable API that can withstand major version upgrades. In fact, most of the core of Vagrant is [implemented using plugins](https://github.com/hashicorp/vagrant/tree/master/plugins). Since Vagrant [dogfoods](https://en.wikipedia.org/wiki/Eating_your_own_dog_food) its own plugin API, you can be confident that the interface is stable and well supported. Use the navigation on the left below the "Plugins" section to learn more about how to use and build your own plugins. vagrant Plugin Development: Host Capabilities Plugin Development: Host Capabilities ====================================== This page documents how to add new capabilities for <hosts> to Vagrant, allowing Vagrant to perform new actions on specific host operating systems. Prior to reading this, you should be familiar with the [plugin development basics](development-basics). > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Host capabilities augment <hosts> by attaching specific "capabilities" to the host, which are actions that can be performed in the context of that host operating system. The power of capabilities is that plugins can add new capabilities to existing host operating systems without modifying the core of Vagrant. In earlier versions of Vagrant, all the host logic was contained in the core of Vagrant and was not easily augmented. Definition and Implementation ------------------------------ The definition and implementation of host capabilities is identical to [guest capabilities](guest-capabilities). The main difference from guest capabilities, however, is that instead of taking a machine as the first argument, all host capabilities take an instance of `Vagrant::Environment` as their first argument. Access to the environment allows host capabilities to access global state, specific machines, and also allows them to call other host capabilities. Calling Capabilities --------------------- Since you have access to the environment in every capability, capabilities can also call *other* host capabilities. This is useful for using the inheritance mechanism of capabilities to potentially ask helpers for more information. For example, the "linux" guest has a "nfs\_check\_command" capability that returns the command to use to check if NFS is running. Capabilities on child guests of Linux such as RedHat or Arch use this capability to mostly inherit the Linux behavior, except for this minor detail. Capabilities can be called like so: ``` environment.host.capability(:capability_name) ``` Any additional arguments given to the method will be passed on to the capability, and the capability will return the value that the actual capability returned. vagrant Plugin Development: Configuration Plugin Development: Configuration ================================== This page documents how to add new configuration options to Vagrant, settable with `config.YOURKEY` in Vagrantfiles. Prior to reading this, you should be familiar with the [plugin development basics](development-basics). 
> **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Definition Component --------------------- Within the context of a plugin definition, new configuration keys can be defined like so: ``` config "foo" do require_relative "config" Config end ``` Configuration keys are defined with the `config` method, which takes as its argument the name of the configuration variable. This means that the configuration object will be accessible via `config.foo` in Vagrantfiles. Then, the block argument returns a class that implements the `Vagrant.plugin(2, :config)` interface. Implementation --------------- Implementations of configuration keys should subclass `Vagrant.plugin(2, :config)`, which is a Vagrant method that will return the proper subclass for a version 2 configuration section. The implementation is very simple, and acts mostly as a plain Ruby object. Here is an example: ``` class Config < Vagrant.plugin(2, :config) attr_accessor :widgets def initialize @widgets = UNSET_VALUE end def finalize! @widgets = 0 if @widgets == UNSET_VALUE end end ``` When using this configuration class, it looks like the following: ``` Vagrant.configure("2") do |config| # ... config.foo.widgets = 12 end ``` Easy. The only odd thing is the `UNSET_VALUE` bits above. This is actually so that Vagrant can properly automatically merge multiple configurations. Merging is covered in the next section, and `UNSET_VALUE` will be explained there. Merging -------- Vagrant works by loading [multiple Vagrantfiles and merging them](../vagrantfile/index#load-order). This merge logic is built into configuration classes. When merging two configuration objects (call them "old" and "new"), Vagrant by default takes all the instance variables defined on "new" that are not `UNSET_VALUE` and sets them onto the merged result. The reason `UNSET_VALUE` is used instead of Ruby's `nil` is that you may want the default to be some value while the user explicitly sets the value to `nil`; if `nil` were the sentinel, Vagrant could not tell whether the user set the instance variable or whether it simply defaulted to nil. This merge logic is what you want almost every time. Hence, in the example above, `@widgets` is set to `UNSET_VALUE`. If we had multiple Vagrant configuration blocks in the same file, Vagrant would properly merge them. The example below shows this: ``` Vagrant.configure("2") do |config| config.foo.widgets = 1 end Vagrant.configure("2") do |config| # ... other stuff end Vagrant.configure("2") do |config| config.foo.widgets = 2 end ``` If this were placed in a Vagrantfile, after merging, the value of widgets would be "2". The `finalize!` method is called only once ever on the final configuration object in order to set defaults. Once `finalize!` is called, that configuration will never be merged again; it is final. This lets you detect any `UNSET_VALUE` and set the proper default, as we do in the above example. Of course, sometimes you want custom merge logic. Let us say we wanted our widgets to be additive. We can override the `merge` method to do this: ``` class Config < Vagrant.plugin("2", :config) attr_accessor :widgets def initialize @widgets = 0 end def merge(other) super.tap do |result| result.widgets = @widgets + other.widgets end end end ``` In this case, we did not use `UNSET_VALUE` for widgets because we did not need that behavior. 
We default to 0 and always merge by summing the two widgets. Now, if we ran the example above that had the 3 configuration blocks, the final value of widgets would be "3". Validation ----------- Configuration classes are also responsible for validating their own values. Vagrant will call the `validate` method to do this. An example validation method is shown below: ``` class Config < Vagrant.plugin("2", :config) # ... def validate(machine) errors = _detected_errors if @widgets <= 5 errors << "widgets must be greater than 5" end { "foo" => errors } end end ``` The validation method is given a `machine` object, since validation is done for each machine that Vagrant is managing. This allows you to conditionally validate some keys based on the state of the machine and so on. The `_detected_errors` method returns any errors already detected by Vagrant, such as unknown configuration keys. This returns an array of error messages, so be sure to turn it into the proper Hash object to return later. The return value is a Ruby Hash object, where the key is a section name, and the value is a list of error messages. These will be displayed by Vagrant. The hash must not contain any values if there are no errors. Accessing ---------- After all the configuration options are merged and finalized, you will likely want to access the finalized value in your plugin. The initializer function varies with each type of plugin, but *most* plugins expose an initializer like this: ``` def initialize(machine, config) @machine = machine @config = config end ``` When authoring a plugin, simply call `super` in your initialize function to setup these instance variables: ``` def initialize(*) super @config.is_now_available # ...existing code end def my_helper @config.is_here_too end ``` For examples, take a look at Vagrant's own internal plugins in the `plugins` folder in Vagrant's source on GitHub. vagrant Plugin Development: Packaging & Distribution Plugin Development: Packaging & Distribution ============================================= This page documents how to organize the file structure of your plugin and distribute it so that it is installable using [standard installation methods](usage). Prior to reading this, you should be familiar with the [plugin development basics](development-basics). > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Example Plugin --------------- The best way to describe packaging and distribution is to look at how another plugin does it. The best example plugin available for this is [vagrant-aws](https://github.com/mitchellh/vagrant-aws). By using [Bundler](http://bundler.io) and Rake, building a new vagrant-aws package is easy. By simply calling `rake package`, a `gem` file is dropped into the directory. By calling `rake release`, the gem is built and it is uploaded to the central [RubyGems](https://rubygems.org) repository so that it can be installed using `vagrant plugin install`. Your plugin can and should be this easy, too, since you basically get this for free by using Bundler. Setting Up Your Project ------------------------ To setup your project, run `bundle gem vagrant-my-plugin`. This will create a `vagrant-my-plugin` directory that has the initial layout to be a RubyGem. You should modify the `vagrant-my-plugin.gemspec` file to add any dependencies and change any metadata. 
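As a rough sketch, a minimal gemspec for a plugin might look like the following (all names and values here are placeholders; adjust them for your project):

```
Gem::Specification.new do |spec|
  spec.name          = "vagrant-my-plugin"
  spec.version       = "0.1.0"
  spec.authors       = ["Your Name"]
  spec.summary       = "An example Vagrant plugin"
  spec.license       = "MIT"

  spec.files         = Dir["lib/**/*.rb"] + ["README.md"]
  spec.require_paths = ["lib"]

  # Development-only dependencies; note that the gem must not
  # depend on "vagrant" itself (see the note that follows).
  spec.add_development_dependency "rake"
end
```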
View the [vagrant-aws.gemspec](https://github.com/mitchellh/vagrant-aws/blob/master/vagrant-aws.gemspec) for a good example. > > **Do not depend on Vagrant** for your gem. Vagrant is no longer distributed as a gem, and you can assume that it will always be available when your plugin is installed. > > Once the directory structure for a RubyGem is setup, you will want to modify your Gemfile. Here is the basic structure of a Gemfile for Vagrant plugin development: ``` source "https://rubygems.org" group :development do gem "vagrant", git: "https://github.com/hashicorp/vagrant.git" end group :plugins do gem "my-vagrant-plugin", path: "." end ``` This Gemfile gets "vagrant" for development. This allows you to `bundle exec vagrant` to run Vagrant with your plugin already loaded, so that you can test it manually that way. The only thing about this Gemfile that may stand out as odd is the "plugins" group and putting your plugin in that group. Because `vagrant plugin` commands do not work in development, this is how you "install" your plugin into Vagrant. Vagrant will automatically load any gems listed in the "plugins" group. Note that this also allows you to add multiple plugins to Vagrant for development, if your plugin works with another plugin. Next, create a `Rakefile` that has at the very least, the following contents: ``` require "rubygems" require "bundler/setup" Bundler::GemHelper.install_tasks ``` If you run `rake -T` now, which lists all the available rake tasks, you should see that you have the `package` and `release` tasks. You can now develop your plugin and build it! You can view the [vagrant-aws Rakefile](https://github.com/mitchellh/vagrant-aws/blob/master/Rakefile) for a more comprehensive example that includes testing. Testing Your Plugin -------------------- To manually test your plugin during development, use `bundle exec vagrant` to execute Vagrant with your plugin loaded (thanks to the Gemfile setup we did earlier). For automated testing, the [vagrant-spec](https://github.com/hashicorp/vagrant-spec) project provides helpers for both unit and acceptance testing plugins. See the giant README for that project for a detailed description of how to integrate vagrant-spec into your project. Vagrant itself (and all of its core plugins) use vagrant-spec for automated testing. vagrant Plugin Development: Commands Plugin Development: Commands ============================= This page documents how to add new commands to Vagrant, invocable via `vagrant YOUR-COMMAND`. Prior to reading this, you should be familiar with the [plugin development basics](development-basics). > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Definition Component --------------------- Within the context of a plugin definition, new commands can be defined like so: ``` command "foo" do require_relative "command" Command end ``` Commands are defined with the `command` method, which takes as an argument the name of the command, in this case "foo." This means the command will be invocable via `vagrant foo`. Then the block argument returns a class that implements the `Vagrant.plugin(2, "command")` interface. You can also define *non-primary commands*. These commands do not show up in the `vagrant -h` output. They only show up if the user explicitly does a `vagrant list-commands` which shows the full listing of available commands. 
This is useful for highly specific commands or plugins that a beginner to Vagrant would not be using anyways. Vagrant itself uses non-primary commands to expose some internal functions, as well. To define a non-primary command: ``` command("foo", primary: false) do require_relative "command" Command end ``` Implementation --------------- Implementations of commands should subclass `Vagrant.plugin(2, :command)`, which is a Vagrant method that will return the proper superclass for a version 2 command. The implementation itself is quite simple, since the class needs to only implement a single method: `execute`. Example: ``` class Command < Vagrant.plugin(2, :command) def execute puts "Hello!" 0 end end ``` The `execute` method is called when the command is invoked, and it should return the exit status (0 for success, anything else for error). This is a command at its simplest form. Of course, the command superclass gives you access to the Vagrant environment and provides some helpers to do common tasks such as command line parsing. Parsing Command-Line Options ----------------------------- The `parse_options` method is available which will parse the command line for you. It takes an [OptionParser](http://ruby-doc.org/stdlib-1.9.3/libdoc/optparse/rdoc/OptionParser.html) as an argument, and adds some common elements to it such as the `--help` flag, automatically showing help if requested. View the API docs directly for more information. This is recommended over raw parsing/manipulation of command line flags. The following is an example of parsing command line flags pulled directly from the built-in Vagrant `destroy` command: ``` options = {} options[:force] = false opts = OptionParser.new do |o| o.banner = "Usage: vagrant destroy [vm-name]" o.separator "" o.on("-f", "--force", "Destroy without confirmation.") do |f| options[:force] = f end end # Parse the options argv = parse_options(opts) ``` Using Vagrant Machines ----------------------- The `with_target_vms` method is a helper that helps you interact with the machines that Vagrant manages in a standard Vagrant way. This method automatically does the right thing in the case of multi-machine environments, handling target machines on the command line (`vagrant foo my-vm`), etc. If you need to do any manipulation of a Vagrant machine, including SSH access, this helper should be used. An example of using the helper, again pulled directly from the built-in `destroy` command: ``` with_target_vms(argv, reverse: true) do |machine| machine.action(:destroy) end ``` In this case, it asks for the machines in reverse order and calls the destroy action on each of them. If a user says `vagrant destroy foo`, then the helper automatically only yields the `foo` machine. If no parameter is given and it is a multi-machine environment, every machine in the environment is yielded, and so on. It just does the right thing. Using the Raw Vagrant Environment ---------------------------------- The raw loaded `Vagrant::Environment` object is available with the '@env' instance variable. vagrant Plugin Development Basics Plugin Development Basics ========================== Plugins are a great way to augment or change the behavior and functionality of Vagrant. Since plugins introduce additional external dependencies for users, they should be used as a last resort when attempting to do something with Vagrant. But if you need to introduce custom behaviors into Vagrant, plugins are the best way, since they are safe against future upgrades and use a stable API. 
> **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Plugins are written using [Ruby](https://www.ruby-lang.org/en/) and are packaged using [RubyGems](https://rubygems.org/). Familiarity with Ruby is required, but the [packaging and distribution](packaging) section should help guide you to packaging your plugin into a RubyGem. Setup and Workflow ------------------- Because plugins are packaged as RubyGems, Vagrant plugins should be developed as if you were developing a regular RubyGem. The easiest way to do this is to use the `bundle gem` command. Once the directory structure for a RubyGem is setup, you will want to modify your Gemfile. Here is the basic structure of a Gemfile for Vagrant plugin development: ``` source "https://rubygems.org" group :development do gem "vagrant", git: "https://github.com/hashicorp/vagrant.git" end group :plugins do gem "my-vagrant-plugin", path: "." end ``` This Gemfile gets "vagrant" for development. This allows you to `bundle exec vagrant` to run Vagrant with your plugin already loaded, so that you can test it manually that way. The only thing about this Gemfile that may stand out as odd is the "plugins" group and putting your plugin in that group. Because `vagrant plugin` commands do not work in development, this is how you "install" your plugin into Vagrant. Vagrant will automatically load any gems listed in the "plugins" group. Note that this also allows you to add multiple plugins to Vagrant for development, if your plugin works with another plugin. When you want to manually test your plugin, use `bundle exec vagrant` in order to run Vagrant with your plugin loaded (as we specified in the Gemfile). Plugin Definition ------------------ All plugins are required to have a definition. A definition contains details about the plugin such as the name of it and what components it contains. A definition at the bare minimum looks like the following: ``` class MyPlugin < Vagrant.plugin("2") name "My Plugin" end ``` A definition is a class that inherits from `Vagrant.plugin("2")`. The "2" there is the version that the plugin is valid for. API stability is only promised for each major version of Vagrant, so this is important. (The 1.x series is working towards 2.0, so the API version is "2") **The most critical feature of a plugin definition** is that it must *always* load, no matter what version of Vagrant is running. Theoretically, Vagrant version 87 (does not actually exist) would be able to load a version 2 plugin definition. This is achieved through clever lazy loading of individual components of the plugin, and is covered shortly. Plugin Components ------------------ Within the definition, a plugin advertises what components it adds to Vagrant. An example is shown below where a command and provisioner are added: ``` class MyPlugin < Vagrant.plugin("2") name "My Plugin" command "run-my-plugin" do require_relative "command" Command end provisioner "my-provisioner" do require_relative "provisioner" Provisioner end end ``` Let us go over the major pieces of what is going on here. Note from a general Ruby language perspective the above *should* be familiar. The syntax should not scare you. If it does, then please familiarize with Ruby further before attempting to write a plugin. The first thing to note is that individual components are defined by making a method call with the component name, such as `command` or `provisioner`. 
These in turn take some parameters. In the case of our example it is just the name of the command and the name of the provisioner. All component definitions then take a block argument (a callback) that must return the actual component implementation class. The block argument is where the "clever lazy loading" (mentioned above) comes into play. The component blocks should lazy load the actual file that contains the implementation of the component, and then return that component. This is done because the actual dependencies and APIs used when defining components are not stable across major Vagrant versions. A command implementation written for Vagrant 2.0 will not be compatible with Vagrant 3.0 and so on. But the *definition* is just plain Ruby that must always be forward compatible to future Vagrant versions. To repeat, **the lazy loading aspect of plugin components is critical** to the way Vagrant plugins work. All components must be lazily loaded and returned within their definition blocks. Now, each component has a different API. Please visit the relevant section using the navigation to the left under "Plugins" to learn more about developing each type of component. Error Handling --------------- One of Vagrant's biggest strength is gracefully handling errors and reporting them in human-readable ways. Vagrant has always strongly believed that if a user sees a stack trace, it is a bug. It is expected that plugins will behave the same way, and Vagrant provides strong error handling mechanisms to assist with this. Error handling in Vagrant is done entirely by raising Ruby exceptions. But Vagrant treats certain errors differently than others. If an error is raised that inherits from `Vagrant::Errors::VagrantError`, then the `vagrant` command will output the message of the error in nice red text to the console and exit with an exit status of 1. Otherwise, Vagrant reports an "unexpected error" that should be reported as a bug, and shows a full stack trace and other ugliness. Any stack traces should be considered bugs. Therefore, to fit into Vagrant's error handling mechanisms, subclass `VagrantError` and set a proper message on your exception. To see examples of this, look at Vagrant's [built-in errors](https://github.com/hashicorp/vagrant/blob/master/lib/vagrant/errors.rb). Console Input and Output ------------------------- Most plugins are likely going to want to do some sort of input/output. Plugins should *never* use Ruby's built-in `puts` or `gets` style methods. Instead, all input/output should go through some sort of Vagrant UI object. The Vagrant UI object properly handles cases where there is no TTY, output pipes are closed, there is no input pipe, etc. A UI object is available on every `Vagrant::Environment` via the `ui` property and is exposed within every middleware environment via the `:ui` key. UI objects have [decent documentation](https://github.com/hashicorp/vagrant/blob/master/lib/vagrant/ui.rb) within the comments of their source.
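As a rough sketch of how these two pieces fit together, the hypothetical command below raises a friendly error and writes all output through the UI object. The class names, message, and file check are made up for illustration, and the `error_message` class helper is assumed from Vagrant's built-in error base class:

```
module MyPlugin
  # A friendly error: Vagrant prints its message in red text and
  # exits with status 1 instead of showing a stack trace.
  class WidgetNotFoundError < Vagrant::Errors::VagrantError
    error_message("The widget file could not be found.")
  end

  class Command < Vagrant.plugin("2", :command)
    def execute
      @env.ui.info("Checking for the widget file...")
      raise WidgetNotFoundError if !File.file?("widget.txt")
      @env.ui.success("Widget found!")
      0
    end
  end
end
```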
vagrant Plugin Development: Providers Plugin Development: Providers ============================== This page documents how to add support for new [providers](../providers/index) to Vagrant, allowing Vagrant to run and manage machines powered by a system other than VirtualBox. Prior to reading this, you should be familiar with the [plugin development basics](development-basics). Prior to developing a provider you should also be familiar with how [providers work](../providers/index) from a user standpoint. > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Example Provider: AWS ---------------------- The best way to learn how to write a provider is to see how one is written in practice. To augment this documentation, please heavily study the [vagrant-aws](https://github.com/mitchellh/vagrant-aws) plugin, which implements an AWS provider. The plugin is a good example of how to structure, test, and implement your plugin. Definition Component --------------------- Within the context of a plugin definition, new providers are defined like so: ``` provider "my_cloud" do require_relative "provider" Provider end ``` Providers are defined with the `provider` method, which takes a single argument specifying the name of the provider. This is the name that is used with `vagrant up` to specify the provider. So in the case above, our provider would be used by calling `vagrant up --provider=my_cloud`. The block argument then lazily loads and returns a class that implements the `Vagrant.plugin(2, :provider)` interface, which is covered next. Provider Class --------------- The provider class should subclass and implement `Vagrant.plugin(2, :provider)` which is an upgrade-safe way to let Vagrant return the proper parent class. This class and the methods that need to be implemented are [very well documented](https://github.com/hashicorp/vagrant/blob/master/lib/vagrant/plugin/v2/provider.rb). The documentation done on the class in the comments should be enough to understand what needs to be done. Viewing the [AWS provider class](https://github.com/mitchellh/vagrant-aws/blob/master/lib/vagrant-aws/provider.rb) as well as the [overall structure of the plugin](https://github.com/mitchellh/vagrant-aws) is recommended as a strong getting started point. Instead of going in depth over each method that needs to be implemented, the documentation will cover high-level but important points to help you create your provider. Box Format ----------- Each provider is responsible for having its own box format. This is actually an extremely simple step due to how generic boxes are. Before explaining you should get familiar with the general [box file format](../boxes/format). The only requirement for your box format is that the `metadata.json` file have a `provider` key which matches the name of your provider you chose above. In addition to this, you may put any data in the metadata as well as any files in the archive. Since Vagrant core itself does not care, it is up to your provider to handle the data of the box. Vagrant core just handles unpacking and verifying the box is for the proper provider. As an example of a couple box formats that are actually in use: * The `virtualbox` box format is just a flat directory of the contents of a `VBoxManage export` command. 
* The `vmware_fusion` box format is just a flat directory of the contents of a `vmwarevm` folder, but only including the bare essential files for VMware to function. * The `aws` box format is just a Vagrantfile defaulting some configuration. You can see an [example aws box unpacked here](https://github.com/mitchellh/vagrant-aws/tree/master/example_box). Before anything with your provider is even written, you can verify your box format works by doing `vagrant box add` with it. When you do a `vagrant box list` you can see what boxes for what providers are installed. You do *not need* the provider plugin installed to add a box for that provider. Actions -------- Probably the most important concept to understand when building a provider is the provider "action" interface. It is the secret sauce that makes providers do the magic they do. Actions are built on top of the concept of [middleware](https://github.com/mitchellh/middleware), which allow providers to execute multiple distinct steps, have error recovery mechanics, as well as before/after behaviors, and much more. Vagrant core requests specific actions from your provider through the `action` method on your provider class. The full list of actions requested is listed in the comments of that method on the superclass. If your provider does not implement a certain action, then Vagrant core will show a friendly error, so do not worry if you miss any, things will not explode or crash spectacularly. Take a look at how the VirtualBox provider [uses actions to build up complicated multi-step processes](https://github.com/hashicorp/vagrant/blob/master/plugins/providers/virtualbox/action.rb#L287). The AWS provider [uses a similar process](https://github.com/mitchellh/vagrant-aws/blob/master/lib/vagrant-aws/action.rb). Built-in Middleware -------------------- To assist with common tasks, Vagrant ships with a set of [built-in middleware](https://github.com/hashicorp/vagrant/tree/master/lib/vagrant/action/builtin). Each of the middleware is well commented on the behavior and options for each, and using these built-in middleware is critical to building a well-behaved provider. These built-in middleware can be thought of as a standard library for your actions on your provider. The core VirtualBox provider uses these built-in middleware heavily. Persisting State ----------------- In the process of creating and managing a machine, providers generally need to store some sort of state somewhere. Vagrant provides each machine with a directory to store this state. As a use-case example for this, the VirtualBox provider stores the UUID of the VirtualBox virtual machine created. This allows the provider to track whether the machine is created, running, suspended, etc. The VMware provider actually copies the entire virtual machine into this state directory, complete with virtual disk drives and everything. The directory is available from the `data_dir` attribute of the `Machine` instance given to initialize your provider. Within middleware actions, the machine is always available via the `:machine` key on the environment. The `data_dir` attribute is a Ruby [Pathname](http://www.ruby-doc.org/stdlib-1.9.3/libdoc/pathname/rdoc/Pathname.html) object. It is important for providers to carefully manage all the contents of this directory. Vagrant core itself does little to clean up this directory. Therefore, when a machine is destroyed, be sure to clean up all the state from this directory. 
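As a hedged illustration of this pattern (the class name, environment keys, and file name below are hypothetical), a middleware action might persist an identifier under `data_dir` like so:

```
class SaveServerId
  def initialize(app, env)
    @app = app
  end

  def call(env)
    # data_dir is a Pathname pointing at the machine's state directory.
    id_file = env[:machine].data_dir.join("server_id")

    # Persist the ID so later actions (and later `vagrant` runs) can
    # find the machine again; remove it when the machine is destroyed.
    if env[:server_id]
      id_file.open("w") { |f| f.write(env[:server_id]) }
    elsif env[:destroyed]
      id_file.delete if id_file.file?
    end

    @app.call(env)
  end
end
```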
Configuration -------------- Vagrant supports [provider-specific configuration](../providers/configuration), allowing for users to finely tune and control specific providers from Vagrantfiles. It is easy for your custom provider to expose custom configuration as well. Provider-specific configuration is a special case of a normal [configuration plugin](configuration). When defining the configuration component, name the configuration the same as the provider, and as a second parameter, specify `:provider`, like so: ``` config("my_cloud", :provider) do require_relative "config" Config end ``` As long as the name matches your provider, and the second `:provider` parameter is given, Vagrant will automatically expose this as provider-specific configuration for your provider. Users can now do the following in their Vagrantfiles: ``` config.vm.provider :my_cloud do |config| # Your specific configuration! end ``` The configuration class returned from the `config` component in the plugin is the same as any other [configuration plugin](configuration), so read that page for more information. Vagrant automatically handles configuration validation and such just like any other configuration piece. The provider-specific configuration is available on the machine object via the `provider_config` attribute. So within actions or your provider class, you can access the config via `machine.provider_config`. > **Best practice:** Your provider should *not require* provider-specific configuration to function, if possible. Vagrant practices a strong [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) philosophy. When a user installs your provider, they should ideally be able to `vagrant up --provider=your_provider` and have it just work. > > Parallelization ---------------- Vagrant supports parallelizing some actions, such as `vagrant up`, if the provider explicitly supports it. By default, Vagrant will not parallelize a provider. When parallelization is enabled, multiple [actions](#actions) may be run in parallel. Therefore, providers must be certain that their action stacks are thread-safe. The core of Vagrant itself (such as box collections, SSH, etc.) is thread-safe. Providers can explicitly enable parallelization by setting the `parallel` option on the provider component: ``` provider("my_cloud", parallel: true) do require_relative "provider" Provider end ``` That is the only change that is needed to enable parallelization. vagrant Action Hooks Action Hooks ============= Action hooks provide ways to interact with Vagrant at a very low level by injecting middleware in various phases of Vagrant's lifecycle. This is an advanced option, even for plugin development. > **Warning: Advanced Topic!** Developing plugins is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Ruby should approach. > > Public Action Hooks -------------------- The following action hooks are available in the core of Vagrant. Please note that this list is not exhaustive and additional hooks can be added via plugins. * [`environment_plugins_loaded`](#environment_plugins_loaded) - called after the plugins have been loaded, but before the configurations, provisioners, providers, etc. are loaded. * [`environment_load`](#environment_load) - called after the environment and all configurations are fully loaded. * [`environment_unload`](#environment_unload) - called after the environment is done being used. The environment should not be used in this hook. 
* [`machine_action_boot`](#machine_action_boot) - called after the hypervisor has reported the machine was booted. * [`machine_action_config_validate`](#machine_action_config_validate) - called after all `Vagrantfile`s have been loaded, merged, and validated. * [`machine_action_destroy`](#machine_action_destroy) - called after the hypervisor has reported the virtual machine is down. * [`machine_action_halt`](#machine_action_halt) - called after the hypervisor has moved the machine into a halted state (usually "stopped" but not "terminated"). * [`machine_action_package`](#machine_action_package) - called after Vagrant has successfully packaged a new box. * [`machine_action_provision`](#machine_action_provision) - called after all provisioners have executed. * [`machine_action_read_state`](#machine_action_read_state) - called after Vagrant has loaded state from disk and the hypervisor. * [`machine_action_reload`](#machine_action_reload) - called after a virtual machine is reloaded (varies by hypervisor). * [`machine_action_resume`](#machine_action_resume) - called after a virtual machine is moved from the halted to up state. * [`machine_action_run_command`](#machine_action_run_command) - called after a command is executed on the machine. * [`machine_action_ssh`](#machine_action_ssh) - called after an SSH connection has been established. * [`machine_action_ssh_run`](#machine_action_ssh_run) - called after an SSH command is executed. * [`machine_action_start`](#machine_action_start) - called after the machine has been started. * [`machine_action_suspend`](#machine_action_suspend) - called after the machine has been suspended. * [`machine_action_sync_folders`](#machine_action_sync_folders) - called after synced folders have been set up. * [`machine_action_up`](#machine_action_up) - called after the machine has entered the up state. Private API ------------ You may find additional action hooks if you browse the Vagrant source code, but only the list of action hooks here are guaranteed to persist between Vagrant releases. Please do not rely on the internal API as it is subject to change without notice. vagrant Plugin Usage Plugin Usage ============= Installing a Vagrant plugin is easy, and should not take more than a few seconds. Please refer to the documentation of any plugin you plan on using for more information on how to use it, but there is one common method for installation and plugin activation. > **Warning!** 3rd party plugins can introduce instabilities into Vagrant due to the nature of them being written by non-core users. > > Installation ------------- Plugins are installed using `vagrant plugin install`: ``` # Installing a plugin from a known gem source $ vagrant plugin install my-plugin # Installing a plugin from a local file source $ vagrant plugin install /path/to/my-plugin.gem ``` Once a plugin is installed, it will automatically be loaded by Vagrant. Plugins which cannot be loaded should not crash Vagrant. Instead, Vagrant will show an error message that a plugin failed to load. Usage ------ Once a plugin is installed, you should refer to the plugin's documentation to see exactly how to use it. Plugins which add commands should be instantly available via `vagrant`, provisioners should be available via `config.vm.provision`, etc. **Note:** In the future, the `vagrant plugin` command will include a subcommand that will document the components that each plugin installs. Updating --------- Plugins can be updated by running `vagrant plugin update`. 
This will update every installed plugin to the latest version. You can update a specific plugin by calling `vagrant plugin update NAME`. Vagrant will output what plugins were updated and to what version. To determine the changes in a specific version of a plugin, refer to the plugin's homepage (usually a GitHub page or similar). It is the plugin author's responsibility to provide a change log if he or she chooses to. Uninstallation --------------- Uninstalling a plugin is as easy as installing it. Just use the `vagrant plugin uninstall` command and the plugin will be removed. Example: ``` $ vagrant plugin uninstall my-plugin ``` Listing Plugins ---------------- To view what plugins are installed into your Vagrant environment at any time, use the `vagrant plugin list` command. This will list the plugins that are installed along with their version. vagrant Vagrant Triggers Vagrant Triggers ================= As of version 2.1.0, Vagrant is capable of executing machine triggers *before* or *after* Vagrant commands. Each trigger is expected to be given a command key for when it should be fired during the Vagrant command lifecycle. These could be defined as a single key or an array which acts like a *whitelist* for the defined trigger. ``` # single command trigger config.trigger.after :up do |trigger| ... end # multiple commands for this trigger config.trigger.before [:up, :destroy, :halt, :package] do |trigger| ... end # or defined as a splat list config.trigger.before :up, :destroy, :halt, :package do |trigger| ... end ``` Alternatively, the key `:all` could be given which would run the trigger before or after every Vagrant command. If there is a command you don't want the trigger to run on, you can ignore that command with the `ignore` option. ``` # single command trigger config.trigger.before :all do |trigger| trigger.info = "Running a before trigger!" trigger.ignore = [:destroy, :halt] end ``` **Note:** *If a trigger is defined on a command that does not exist, a warning will be displayed.* Triggers can be defined as a block or hash in a Vagrantfile. The example below will result in the same trigger: ``` config.trigger.after :up do |trigger| trigger.name = "Finished Message" trigger.info = "Machine is up!" end config.trigger.after :up, name: "Finished Message", info: "Machine is up!" ``` Triggers can also be defined within the scope of guests in a Vagrantfile. These triggers will only run on the configured guest. An example of a guest only trigger: ``` config.vm.define "ubuntu" do |ubuntu| ubuntu.vm.box = "ubuntu" ubuntu.trigger.before :destroy do |trigger| trigger.warn = "Dumping database to /vagrant/outfile" trigger.run_remote = {inline: "pg_dump dbname > /vagrant/outfile"} end end ``` Global and machine-scoped triggers will execute in the order that they are defined within a Vagrantfile. Take for example an abstracted Vagrantfile: ``` Vagrantfile global trigger 1 global trigger 2 machine defined machine trigger 3 global trigger 4 end ``` In this generic case, the triggers would fire in the order: 1 -> 2 -> 3 -> 4 For more information about what options are available for triggers, see the [configuration section](configuration). vagrant Configuration Configuration ============== Vagrant Triggers has a few options to define trigger behavior. Options -------- The trigger class takes various options. * [`action`](#action) (symbol, array) - Expected to be a single symbol value, an array of symbols, or a *splat* of symbols. 
The first argument that comes after either **before** or **after** when defining a new trigger. Can be any valid Vagrant command. It also accepts a special value `:all` which will make the trigger fire for every action. An action can be ignored with the `ignore` setting if desired. These are the valid action commands for triggers: + [`destroy`](#destroy) + [`halt`](#halt) + [`provision`](#provision) + [`reload`](#reload) + [`resume`](#resume) + [`suspend`](#suspend) + [`up`](#up) * [`ignore`](#ignore) (symbol, array) - Symbol or array of symbols corresponding to the action that a trigger should not fire on. * [`info`](#info) (string) - A message that will be printed at the beginning of a trigger. * [`name`](#name) (string) - The name of the trigger. If set, the name will be displayed when firing the trigger. * [`on_error`](#on_error) (symbol) - Defines how the trigger should behave if it encounters an error. By default this will be `:halt`, but can be configured to ignore failures and continue on with `:continue`. * [`only_on`](#only_on) (string, regex, array) - Limit the trigger to these guests. Values can be a string or regex that matches a guest name. * [`ruby`](#ruby) (block) - A block of Ruby code to be executed on the host. The block accepts two arguments that can be used with your Ruby code: `env` and `machine`. These options correspond to the Vagrant environment used (note: these are not your shell's environment variables), and the Vagrant guest machine that the trigger is firing on. This option can only be a `Proc` type, which must be explicitly called out when using the hash syntax for a trigger. ``` ubuntu.trigger.after :up do |trigger| trigger.info = "More information" trigger.ruby do |env,machine| greetings = "hello there #{machine.id}!" puts greetings end end ``` * [`run_remote`](#run_remote) (hash) - A collection of settings to run an inline or remote script on the guest. These settings correspond to the [shell provisioner](../provisioning/shell). * [`run`](#run) (hash) - A collection of settings to run an inline or remote script on the host. These settings correspond to the [shell provisioner](../provisioning/shell). However, at the moment the only settings `run` takes advantage of are: + [`args`](#args) + [`inline`](#inline) + [`path`](#path) * [`warn`](#warn) (string) - A warning message that will be printed at the beginning of a trigger. * [`exit_codes`](#exit_codes) (integer, array) - A set of acceptable exit codes to continue on. Defaults to `0` if the option is absent. For now this is only valid with the `run` option. * [`abort`](#abort) (integer, boolean) - An option that will exit the running Vagrant process once the trigger fires. If set to `true`, Vagrant will use exit code 1. Otherwise, an integer can be provided and Vagrant will use it as its exit code when aborting.
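To tie several of these options together, the following is a minimal sketch of a trigger that runs a script on the host and treats a specific nonzero exit code as acceptable. The script name `check.sh` and the accepted exit codes are illustrative assumptions, not defaults:

```
Vagrant.configure("2") do |config|
  config.trigger.before :up do |trigger|
    trigger.info = "Running a host-side check script"
    # "check.sh" is a hypothetical script in the project directory
    trigger.run = {path: "check.sh"}
    # Accept exit codes 0 and 2 from the script
    trigger.exit_codes = [0, 2]
    # On any other failure, log it and continue instead of halting the run
    trigger.on_error = :continue
  end
end
```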
vagrant Basic Usage Basic Usage ============ Below are some very simple examples of how to use Vagrant Triggers. Examples --------- The following is a basic example of two global triggers. One that runs *before* the `:up` command and one that runs *after* the `:up` command: ``` Vagrant.configure("2") do |config| config.trigger.before :up do |trigger| trigger.name = "Hello world" trigger.info = "I am running before vagrant up!!" end config.trigger.after :up do |trigger| trigger.name = "Hello world" trigger.info = "I am running after vagrant up!!" end config.vm.define "ubuntu" do |ubuntu| ubuntu.vm.box = "ubuntu" end end ``` These will run before and after each defined guest in the Vagrantfile. Running a remote script to save a database on your host before **destroy**ing a guest: ``` Vagrant.configure("2") do |config| config.vm.define "ubuntu" do |ubuntu| ubuntu.vm.box = "ubuntu" ubuntu.trigger.before :destroy do |trigger| trigger.warn = "Dumping database to /vagrant/outfile" trigger.run_remote = {inline: "pg_dump dbname > /vagrant/outfile"} end end end ``` Now that the trigger is defined, running the **destroy** command will fire off the defined trigger before Vagrant destroys the machine. ``` $ vagrant destroy ubuntu ``` An example of defining three triggers that start and stop tinyproxy on your host machine using homebrew: ``` #!/bin/bash # start-tinyproxy.sh brew services start tinyproxy ``` ``` #!/bin/bash # stop-tinyproxy.sh brew services stop tinyproxy ``` ``` Vagrant.configure("2") do |config| config.vm.define "ubuntu" do |ubuntu| ubuntu.vm.box = "ubuntu" ubuntu.trigger.before :up do |trigger| trigger.info = "Starting tinyproxy..." trigger.run = {path: "start-tinyproxy.sh"} end ubuntu.trigger.after :destroy, :halt do |trigger| trigger.info = "Stopping tinyproxy..." trigger.run = {path: "stop-tinyproxy.sh"} end end end ``` Running `vagrant up` would fire the before trigger to start tinyproxy, whereas running either `vagrant destroy` or `vagrant halt` would stop tinyproxy. ### Ruby Option Triggers can also be defined to run Ruby, rather than bash or PowerShell. An example of this might be using a Ruby option to get more information from the `VBoxManage` tool. In this case, we are printing the `ostype` defined for the guest after it has been brought up. ``` Vagrant.configure("2") do |config| config.vm.define "ubuntu" do |ubuntu| ubuntu.vm.box = "ubuntu" ubuntu.trigger.after :up do |trigger| trigger.info = "More information with ruby magic" trigger.ruby do |env,machine| puts `VBoxManage showvminfo #{machine.id} --machinereadable | grep ostype` end end end end ``` If you are defining your triggers using the hash syntax, you must use the `Proc` type for defining a ruby trigger. ``` Vagrant.configure("2") do |config| config.vm.define "ubuntu" do |ubuntu| ubuntu.vm.box = "ubuntu" ubuntu.trigger.after :up, info: "More information with ruby magic", ruby: proc{|env,machine| puts `VBoxManage showvminfo #{machine.id} --machinereadable | grep ostype`} end end ``` vagrant Networking Networking =========== In order to access the Vagrant environment created, Vagrant exposes some high-level networking options for things such as forwarded ports, connecting to a public network, or creating a private network. The high-level networking options are meant to define an abstraction that works across multiple [providers](../providers/index).
This means that you can take the Vagrantfile you used to spin up a VirtualBox machine and reasonably expect it to behave the same with something like VMware. You should first read the [basic usage](basic_usage) page and then continue by reading the documentation for a specific networking primitive by following the navigation to the left. Advanced Configuration ----------------------- In some cases, these options are *too* high-level, and you may want to more finely tune and configure the network interfaces of the underlying machine. Most providers expose [provider-specific configuration](../providers/configuration) to do this, so please read the documentation for your specific provider to see what options are available. > **For beginners:** It is strongly recommended you use only the high-level networking options until you are comfortable with the Vagrant workflow and have things working at a basic level. Provider-specific network configuration can very quickly lock you out of your guest machine if improperly done. > > vagrant Public Networks Public Networks ================ **Network identifier: `public_network`** Vagrant public networks are less private than private networks, and the exact meaning actually varies from [provider to provider](../providers/index), hence the ambiguous definition. The idea is that while [private networks](private_network) should never allow the general public access to your machine, public networks can. > **Confused?** We kind of are, too. It is likely that public networks will be replaced by `:bridged` in a future release, since that is in general what should be done with public networks, and providers that do not support bridging generally do not have any other features that map to public networks either. > > > **Warning!** Vagrant boxes are insecure by default and by design, featuring public passwords, insecure keypairs for SSH access, and potentially allowing root access over SSH. With these known credentials, your box is easily accessible by anyone on your network. Before configuring Vagrant to use a public network, consider *all* potential security implications and review the [default box configuration](../boxes/base) to identify potential security risks. > > DHCP ----- The easiest way to use a public network is to allow the IP to be assigned via DHCP. In this case, defining a public network is trivially easy: ``` Vagrant.configure("2") do |config| config.vm.network "public_network" end ``` When DHCP is used, the IP can be determined by using `vagrant ssh` to SSH into the machine and using the appropriate command line tool to find the IP, such as `ifconfig`. ### Using the DHCP Assigned Default Route Some cases require the DHCP assigned default route to be untouched. In these cases one may specify the `use_dhcp_assigned_default_route` option. As an example: ``` Vagrant.configure("2") do |config| config.vm.network "public_network", use_dhcp_assigned_default_route: true end ``` Static IP ---------- Depending on your setup, you may wish to manually set the IP of your bridged interface. To do so, add a `:ip` clause to the network definition. ``` config.vm.network "public_network", ip: "192.168.0.17" ``` Default Network Interface -------------------------- If more than one network interface is available on the host machine, Vagrant will ask you to choose which interface the virtual machine should bridge to. A default interface can be specified by adding a `:bridge` clause to the network definition.
``` config.vm.network "public_network", bridge: "en1: Wi-Fi (AirPort)" ``` The string identifying the desired interface must exactly match the name of an available interface. If it cannot be found, Vagrant will ask you to pick from a list of available network interfaces. With some providers, it is possible to specify a list of adapters to bridge against: ``` config.vm.network "public_network", bridge: [ "en1: Wi-Fi (AirPort)", "en6: Broadcom NetXtreme Gigabit Ethernet Controller", ] ``` In this example, the first network adapter that exists and can successfully be bridged will be used. Disable Auto-Configuration --------------------------- If you want to manually configure the network interface yourself, you can disable auto-configuration by specifying `auto_config`: ``` Vagrant.configure("2") do |config| config.vm.network "public_network", auto_config: false end ``` Then the shell provisioner can be used to configure the IP of the interface: ``` Vagrant.configure("2") do |config| config.vm.network "public_network", auto_config: false # manual ip config.vm.provision "shell", run: "always", inline: "ifconfig eth1 192.168.0.17 netmask 255.255.255.0 up" # manual ipv6 config.vm.provision "shell", run: "always", inline: "ifconfig eth1 inet6 add fc00::17/7" end ``` Default Router --------------- Depending on your setup, you may wish to manually override the default router configuration. This is required if you need to access the Vagrant box from other networks over the public network. To do so, you can use a shell provisioner script: ``` Vagrant.configure("2") do |config| config.vm.network "public_network", ip: "192.168.0.17" # default router config.vm.provision "shell", run: "always", inline: "route add default gw 192.168.0.1" # default router ipv6 config.vm.provision "shell", run: "always", inline: "route -A inet6 add default gw fc00::1 eth1" # delete default gw on eth0 config.vm.provision "shell", run: "always", inline: "eval `route -n | awk '{ if ($8 ==\"eth0\" && $2 != \"0.0.0.0\") print \"route del default gw \" $2; }'`" end ``` Note the above is fairly complex and may be guest OS specific, but we document the rough idea of how to do it because it is a common question. vagrant Private Networks Private Networks ================= **Network identifier: `private_network`** Vagrant private networks allow you to access your guest machine by some address that is not publicly accessible from the global internet. In general, this means your machine gets an address in the [private address space](https://en.wikipedia.org/wiki/Private_network#Private_IPv4_address_spaces). Multiple machines within the same private network (also usually with the restriction that they're backed by the same [provider](../providers/index)) can communicate with each other on private networks. > **Guest operating system support.** Private networks generally require configuring the network adapters on the guest machine. This process varies from OS to OS. Vagrant ships with knowledge of how to configure networks on a variety of guest operating systems, but it is possible if you are using a particularly old or new operating system that private networks will not properly configure. > > DHCP ----- The easiest way to use a private network is to allow the IP to be assigned via DHCP. ``` Vagrant.configure("2") do |config| config.vm.network "private_network", type: "dhcp" end ``` This will automatically assign an IP address from the reserved address space.
The IP address can be determined by using `vagrant ssh` to SSH into the machine and using the appropriate command line tool to find the IP, such as `ifconfig`. Static IP ---------- You can also specify a static IP address for the machine. This lets you access the Vagrant managed machine using a static, known IP. The Vagrantfile for a static IP looks like this: ``` Vagrant.configure("2") do |config| config.vm.network "private_network", ip: "192.168.50.4" end ``` It is up to the users to make sure that the static IP does not collide with any other machines on the same network. While you can choose any IP you would like, you *should* use an IP from the [reserved private address space](https://en.wikipedia.org/wiki/Private_network#Private_IPv4_address_spaces). These IPs are guaranteed to never be publicly routable, and most routers actually block traffic from going to them from the outside world. For some operating systems, additional configuration options for the static IP address are available such as setting the default gateway or MTU. > **Warning!** Do not choose an IP that overlaps with any other IP space on your system. This can cause the network to not be reachable. > > IPv6 ----- You can specify a static IP via IPv6. DHCP for IPv6 is not supported. To use IPv6, just specify an IPv6 address as the IP: ``` Vagrant.configure("2") do |config| config.vm.network "private_network", ip: "fde4:8dba:82e1::c4" end ``` This will assign that IP to the machine. The entire `/64` subnet will be reserved. Please make sure to use the reserved local addresses approved for IPv6. You can also modify the prefix length by changing the `netmask` option (defaults to 64): ``` Vagrant.configure("2") do |config| config.vm.network "private_network", ip: "fde4:8dba:82e1::c4", netmask: "96" end ``` IPv6 support for private networks was added in Vagrant 1.7.5 and may not work with every provider. Disable Auto-Configuration --------------------------- If you want to manually configure the network interface yourself, you can disable Vagrant's auto-configure feature by specifying `auto_config`: ``` Vagrant.configure("2") do |config| config.vm.network "private_network", ip: "192.168.50.4", auto_config: false end ``` If you already started the Vagrant environment before setting `auto_config`, the files it initially placed there will stay there. You will have to remove those files manually or destroy and recreate the machine. The files created by Vagrant depend on the OS. For example, for many Linux distros, this is `/etc/network/interfaces`. In general you should look in the normal location that network interfaces are configured for your distro. vagrant Forwarded Ports Forwarded Ports ================ **Network identifier: `forwarded_port`** Vagrant forwarded ports allow you to access a port on your host machine and have all data forwarded to a port on the guest machine, over either TCP or UDP. For example: If the guest machine is running a web server listening on port 80, you can make a forwarded port mapping to port 8080 (or anything) on your host machine. You can then open your browser to `localhost:8080` and browse the website, while all actual network data is being sent to the guest. Defining a Forwarded Port -------------------------- The forwarded port configuration expects two parameters, the port on the guest and the port on the host.
Example: ``` Vagrant.configure("2") do |config| config.vm.network "forwarded_port", guest: 80, host: 8080 end ``` This will allow accessing port 80 on the guest via port 8080 on the host. For most providers, forwarded ports by default bind to all interfaces. This means that other devices on your network can access the forwarded ports. If you want to restrict access, see the `guest_ip` and `host_ip` settings below. Options Reference ------------------ This is a complete list of the options that are available for forwarded ports. Only the `guest` and `host` options are required. Below this section, there are more detailed examples of using these options. * [`auto_correct`](#auto_correct) (boolean) - If true, the host port will be changed automatically in case it collides with a port already in use. By default, this is false. * [`guest`](#guest) (int) - The port on the guest that you want to be exposed on the host. This can be any port. * [`guest_ip`](#guest_ip) (string) - The guest IP to bind the forwarded port to. If this is not set, the port will go to every IP interface. By default, this is empty. * [`host`](#host) (int) - The port on the host that you want to use to access the port on the guest. This must be greater than port 1024 unless Vagrant is running as root (which is not recommended). * [`host_ip`](#host_ip) (string) - The IP on the host you want to bind the forwarded port to. If not specified, it will be bound to every IP. By default, this is empty. * [`protocol`](#protocol) (string) - Either "udp" or "tcp". This specifies the protocol that will be allowed through the forwarded port. By default this is "tcp". * [`id`](#id) (string) - Name of the rule (can be visible in VirtualBox). By default this is the protocol followed by the guest port (for example, "tcp123"). Forwarded Port Protocols ------------------------- By default, any defined port will only forward the TCP protocol. As an optional third parameter, you may specify `protocol: 'udp'` in order to pass UDP traffic. If a given port needs to be able to listen to the same port on both protocols, you must define the port twice with each protocol specified, like so: ``` Vagrant.configure("2") do |config| config.vm.network "forwarded_port", guest: 2003, host: 12003, protocol: "tcp" config.vm.network "forwarded_port", guest: 2003, host: 12003, protocol: "udp" end ``` Port Collisions and Correction ------------------------------- It is common when running multiple Vagrant machines to unknowingly create forwarded port definitions that collide with each other (two separate Vagrant projects forwarded to port 8080, for example). Vagrant includes a built-in mechanism to detect this and correct it automatically. Port collision detection is always done. Vagrant will not allow you to define a forwarded port where the port on the host appears to be accepting traffic or connections. Port collision auto-correction must be manually enabled for each forwarded port, since it is often surprising when it occurs and can lead the Vagrant user to think that the port was not properly forwarded. Enabling auto-correction is easy: ``` Vagrant.configure("2") do |config| config.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true end ``` The final `:auto_correct` parameter set to true tells Vagrant to auto-correct any collisions. During a `vagrant up` or `vagrant reload`, Vagrant will output information about any collision detections and auto-corrections made, so you can take notice and act accordingly.
You can define allowed port range assignable by Vagrant when port collision is detected via [config.vm.usable\_port\_range](../vagrantfile/machine_settings) property. ``` Vagrant.configure("2") do |config| config.vm.usable_port_range = 8000..8999 end ``` vagrant Basic Usage of Networking Basic Usage of Networking ========================== Vagrant offers multiple options for how you are able to connect your guest machines to the network, but there is a standard usage pattern as well as some points common to all network configurations that are important to know. Configuration -------------- All networks are configured within your [Vagrantfile](../vagrantfile/index) using the `config.vm.network` method call. For example, the Vagrantfile below defines some port forwarding: ``` Vagrant.configure("2") do |config| # ... config.vm.network "forwarded_port", guest: 80, host: 8080 end ``` Every network type has an identifier such as `"forwarded_port"` in the above example. Following this is a set of configuration arguments that can differ for each network type. In the case of forwarded ports, two numeric arguments are expected: the port on the guest followed by the port on the host that the guest port can be accessed by. Multiple Networks ------------------ Multiple networks can be defined by having multiple `config.vm.network` calls within the Vagrantfile. The exact meaning of this can differ for each [provider](../providers/index), but in general the order specifies the order in which the networks are enabled. Enabling Networks ------------------ Networks are automatically configured and enabled after they've been defined in the Vagrantfile as part of the `vagrant up` or `vagrant reload` process. vagrant Providers Providers ========== While Vagrant ships out of the box with support for [VirtualBox](https://www.virtualbox.org), [Hyper-V](https://www.microsoft.com/hyper-v), and [Docker](https://www.docker.io), Vagrant has the ability to manage other types of machines as well. This is done by using other *providers* with Vagrant. Alternate providers can offer different features that make more sense in your use case. For example, if you are using Vagrant for any real work, [VMware](https://www.vmware.com) providers are recommended since they're well supported and generally more stable and performant than VirtualBox. Before you can use another provider, you must install it. Installation of other providers is done via the Vagrant plugin system. Once the provider is installed, usage is straightforward and simple, as you would expect with Vagrant. Read into the relevant subsections found in the navigation to the left for more information.
vagrant Default Provider Default Provider ================= By default, VirtualBox is the default provider for Vagrant. VirtualBox is still the most accessible platform to use Vagrant: it is free, cross-platform, and has been supported by Vagrant for years. With VirtualBox as the default provider, it provides the lowest friction for new users to get started with Vagrant. However, you may find after using Vagrant for some time that you prefer to use another provider as your default. In fact, this is quite common. To make this experience better, Vagrant allows specifying the default provider to use by setting the `VAGRANT_DEFAULT_PROVIDER` environmental variable. Just set `VAGRANT_DEFAULT_PROVIDER` to the provider you wish to be the default. For example, if you use Vagrant with VMware Fusion, you can set the environmental variable to `vmware_fusion` and it will be your default. vagrant Custom Provider Custom Provider ================ To learn how to make your own custom Vagrant providers, read the Vagrant plugin development guide on [creating custom providers](../plugins/providers). vagrant Configuration Configuration ============== While well-behaved Vagrant providers should work with any Vagrantfile with sane defaults, providers generally expose unique configuration options so that you can get the most out of each provider. This provider-specific configuration is done within the Vagrantfile in a way that is portable, easy to use, and easy to understand. Portability ------------ An important fact is that even if you configure other providers within a Vagrantfile, the Vagrantfile remains portable even to individuals who do not necessarily have that provider installed. For example, if you configure VMware Fusion and send it to an individual who does not have the VMware Fusion provider, Vagrant will silently ignore that part of the configuration. Provider Configuration ----------------------- Configuring a specific provider looks like this: ``` Vagrant.configure("2") do |config| # ... config.vm.provider "virtualbox" do |vb| vb.customize ["modifyvm", :id, "--cpuexecutioncap", "50"] end end ``` Multiple `config.vm.provider` blocks can exist to configure multiple providers. The configuration format should look very similar to how provisioners are configured. The `config.vm.provider` takes a single parameter: the name of the provider being configured. Then, an inner block with custom configuration options is exposed that can be used to configure that provider. This inner configuration differs among providers, so please read the documentation for your provider of choice to see available configuration options. Remember, some providers do not require any provider-specific configuration and work directly out of the box. Provider-specific configuration is meant as a way to expose more options to get the most of the provider of your choice. It is not meant as a roadblock to running against a specific provider. Overriding Configuration ------------------------- Providers can also override non-provider specific configuration, such as `config.vm.box` and any other Vagrant configuration. This is done by specifying a second argument to `config.vm.provider`. This argument is just like the normal `config`, so set any settings you want, and they will be overridden only for that provider. 
Example: ``` Vagrant.configure("2") do |config| config.vm.box = "precise64" config.vm.provider "vmware_fusion" do |v, override| override.vm.box = "precise64_fusion" end end ``` In the above case, Vagrant will use the "precise64" box by default, but will use "precise64\_fusion" if the VMware Fusion provider is used. > **The Vagrant Way:** The proper "Vagrant way" is to avoid any provider-specific overrides if possible by making boxes for multiple providers that are as identical as possible, since box names can map to multiple providers. However, this is not always possible, and in those cases, overrides are available. > > vagrant Provider Installation Provider Installation ====================== Providers are distributed as Vagrant plugins, and are therefore installed using [standard plugin installation steps](../plugins/usage). After installing a plugin which contains a provider, the provider should immediately be available. vagrant Basic Provider Usage Basic Provider Usage ===================== Boxes ------ Vagrant boxes are all provider-specific. A box for VirtualBox is incompatible with the VMware Fusion provider, or any other provider. A box must be installed for each provider, and can share the same name as other boxes as long as the providers differ. So you can have both a VirtualBox and VMware Fusion "precise64" box. Installing boxes has not changed at all: ``` $ vagrant box add hashicorp/precise64 ``` Vagrant now automatically detects what provider a box is for. This is visible when listing boxes. Vagrant puts the provider in parentheses next to the name, as can be seen below. ``` $ vagrant box list precise64 (virtualbox) precise64 (vmware_fusion) ``` Vagrant Up ----------- Once a provider is installed, you can use it by calling `vagrant up` with the `--provider` flag. This will force Vagrant to use that specific provider. No other configuration is necessary! In normal day-to-day usage, the `--provider` flag is not necessary since Vagrant can usually pick the right provider for you. More details on how it does this is below. ``` $ vagrant up --provider=vmware_fusion ``` If you specified a `--provider` flag, you only need to do this for the `up` command. Once a machine is up and running, Vagrant is able to see what provider is backing a running machine, so commands such as `destroy`, `suspend`, etc. do not need to be told what provider to use. > Vagrant currently restricts you to bringing up one provider per machine. If you have a multi-machine environment, you can bring up one machine backed by VirtualBox and another backed by VMware Fusion, for example, but you cannot back the *same machine* with both VirtualBox and VMware Fusion. This is a limitation that will be removed in a future version of Vagrant. > > Default Provider ----------------- As mentioned earlier, you typically do not need to specify `--provider` *ever*. Vagrant is smart enough about being able to detect the provider you want for a given environment. Vagrant attempts to find the default provider in the following order: 1. The `--provider` flag on a `vagrant up` is chosen above all else, if it is present. 2. If the `VAGRANT_DEFAULT_PROVIDER` environmental variable is set, it takes next priority and will be the provider chosen. 3. Vagrant will go through all of the `config.vm.provider` calls in the Vagrantfile and try each in order. It will choose the first provider that is usable. For example, if you configure Hyper-V, it will never be chosen on Mac this way. It must be both configured and usable. 4. 
Vagrant will go through all installed provider plugins (including the ones that come with Vagrant), and find the first plugin that reports it is usable. There is a priority system here: providers that are known to work better have a higher priority than those that do not. For example, if you have the VMware provider installed, it will always take priority over VirtualBox. 5. If Vagrant still has not found any usable providers, it will error. Using this method, there are very few cases that Vagrant does not find the correct provider for you. This also allows each [Vagrantfile](../vagrantfile/index) to define what providers the development environment is made for by ordering provider configurations. A trick is to use `config.vm.provider` with no configuration at the top of your Vagrantfile to define the order of providers you prefer to support: ``` Vagrant.configure("2") do |config| # ... other config up here # Prefer VMware Fusion before VirtualBox config.vm.provider "vmware_fusion" config.vm.provider "virtualbox" end ``` vagrant Vagrant and Windows Subsystem for Linux Vagrant and Windows Subsystem for Linux ======================================== Recent versions of Windows 10 now include Windows Subsystem for Linux (WSL) as an optional Windows feature. The WSL supports running a Linux environment within Windows. Vagrant support for WSL is still in development and should be considered *beta*. > **Warning: Advanced Topic!** Using Vagrant within the Windows Subsystem for Linux is an advanced topic that only experienced Vagrant users who are reasonably comfortable with Windows, WSL, and Linux should approach. > > Vagrant Installation --------------------- Vagrant *must* be installed within the Linux distribution used with WSL. While the `vagrant.exe` executable provided by the Vagrant Windows installation is accessible from within the WSL, it will not function as expected. Download the installer package for the Linux distribution from the releases page and install Vagrant. *NOTE: When Vagrant is installed on the Windows system, the version installed within the Linux distribution must match.* Vagrant Usage ============== Windows Access --------------- By default Vagrant will not access features available on the Windows system from within the WSL. This means the VirtualBox and Hyper-V providers will not be available. To enable Windows access, which will also enable the VirtualBox and Hyper-V providers, set the `VAGRANT_WSL_ENABLE_WINDOWS_ACCESS` environment variable: ``` $ export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1" ``` When Windows access is enabled Vagrant will automatically adjust `VAGRANT_HOME` to be located on the Windows host. This is required to ensure `VAGRANT_HOME` is located on a DrvFs file system. PATH modifications ------------------- Vagrant will detect when it is being run within the WSL and adjust how it locates and executes third party executables. For example, when using the VirtualBox provider Vagrant will interact with VirtualBox installed on the Windows system, not within the WSL. It is important to ensure that any required Windows executables are available within your `PATH` to allow Vagrant to access them. For example, when using the VirtualBox provider: ``` export PATH="$PATH:/mnt/c/Program Files/Oracle/VirtualBox" ``` Synced Folders --------------- Support for synced folders within the WSL is implementation dependent. In most cases synced folders will not be supported when running Vagrant within WSL on a VolFs file system.
Synced folder implementations must "opt-in" to supporting usage from VolFs file systems. To use synced folders from within the WSL that do not support VolFs file systems, move the Vagrant project directory to a DrvFs file system location (/mnt/c/ prefixed path for example). Windows Access --------------- Working within the WSL provides a layer of isolation from the actual Windows system. In most cases Vagrant will need access to the actual Windows system to function correctly. As most Vagrant providers will need to be installed on Windows directly (not within the WSL) Vagrant will require Windows access. Access to the Windows system is controlled via an environment variable: `VAGRANT_WSL_ENABLE_WINDOWS_ACCESS`. If this environment variable is set, Vagrant will access the Windows system to run executables and enable things like synced folders. When running in a bash shell within WSL, the environment variable can be setup like so: ``` $ export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1" ``` This will enable Vagrant to access the Windows system outside of the WSL and properly interact with Windows executables. This will automatically modify the `VAGRANT_HOME` environment variable if it is not already defined, setting it to be within the user's home directory on Windows. It is important to note that paths shared with the Windows system will not have Linux permissions enforced. For example, when a directory within the WSL is synced to a guest using the VirtualBox provider, any local permissions defined on that directory (or its contents) will not be visible from the guest. Likewise, any files created from the guest within the synced folder will be world readable/writeable in WSL. Other useful WSL related environment variables: * [`VAGRANT_WSL_WINDOWS_ACCESS_USER`](#vagrant_wsl_windows_access_user) - Override current Windows username * [`VAGRANT_WSL_DISABLE_VAGRANT_HOME`](#vagrant_wsl_disable_vagrant_home) - Do not modify the `VAGRANT_HOME` variable * [`VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH`](#vagrant_wsl_windows_access_user_home_path) - Custom Windows system home path If a Vagrant project directory is not within the user's home directory on the Windows system, certain actions that include permission checks may fail (like `vagrant ssh`). When accessing Vagrant projects outside the WSL Vagrant will skip these permission checks when the project path is within the path defined in the `VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH` environment variable. For example, if a user wants to run a Vagrant project from the WSL that is located at `C:\TestDir\vagrant-project`: ``` C:\Users\vagrant> cd C:\TestDir\vagrant-project C:\TestDir\vagrant-project> bash vagrant@vagrant-10:/mnt/c/TestDir/vagrant-project$ export VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH="/mnt/c/TestDir" vagrant@vagrant-10:/mnt/c/TestDir/vagrant-project$ vagrant ssh ``` Using Docker ------------- The docker daemon cannot be run inside the Windows Subsystem for Linux. However, the daemon *can* be run on Windows and accessed by Vagrant while running in the WSL. Once docker is installed and running on Windows, export the following environment variable to give Vagrant access: ``` vagrant@vagrant-10:/mnt/c/Users/vagrant$ export DOCKER_HOST=tcp://127.0.0.1:2375 ``` vagrant Other Other ====== This section covers other information that does not quite fit under the other categories. 
* [Debugging](debugging) * [Environment Variables](environmental-variables) vagrant Debugging Debugging ========== As much as we try to keep Vagrant stable and bug free, it is inevitable that issues will arise and Vagrant will behave in unexpected ways. When using these support channels, it is generally helpful to include debugging logs along with any error reports. These logs can often help you troubleshoot any problems you may be having. > **Scan for sensitive information!** Vagrant debug logs include information about your system including environment variables and user information. If you store sensitive information in the environment or in your user account, please scan or scrub the debug log of this information before uploading the contents to the public Internet. > > > **Submit debug logs using GitHub Gist.** If you plan on submitting a bug report or issue that includes debug-level logs, please use a service like [Gist](https://gist.github.com). **Do not** paste the raw debug logs into an issue as it makes it very difficult to scroll and parse the information. > > To enable detailed logging, set the `VAGRANT_LOG` environmental variable to the desired log level name, which is one of `debug` (loud), `info` (normal), `warn` (quiet), and `error` (very quiet). When asking for support, please set this to `debug`. When troubleshooting your own issues, you should start with `info`, which is much quieter, but contains important information about the behavior of Vagrant. On Linux and Mac systems, this can be done by prepending the `vagrant` command with an environmental variable declaration: ``` $ VAGRANT_LOG=info vagrant up ``` On Windows, multiple steps are required: ``` $ set VAGRANT_LOG=info $ vagrant up ``` You can also get the debug level output using the `--debug` command line option. For example: ``` $ vagrant up --debug ``` On Linux and Mac, if you are saving the output to a file, you may need to redirect stderr and stdout using `&>`: ``` $ vagrant up --debug &> vagrant.log ``` On Windows in PowerShell (outputs to log and screen): ``` $ vagrant up --debug 2>&1 | Tee-Object -FilePath ".\vagrant.log" ``` vagrant Environmental Variables Environmental Variables ======================== Vagrant has a set of environmental variables that can be used to configure and control it in a global way. This page lists those environmental variables. `VAGRANT_ALIAS_FILE` --------------------- `VAGRANT_ALIAS_FILE` can be set to change the file where Vagrant aliases are defined. By default, this is set to `~/.vagrant.d/aliases`. `VAGRANT_DEBUG_LAUNCHER` ------------------------- For performance reasons, especially for Windows users, Vagrant uses a static binary to launch the actual Vagrant process. If you have *very* early issues when launching Vagrant from the official installer, you can specify the `VAGRANT_DEBUG_LAUNCHER` environment variable to output debugging information about the launch process. `VAGRANT_DEFAULT_PROVIDER` --------------------------- This configures the default provider Vagrant will use. This normally does not need to be set since Vagrant is fairly intelligent about how to detect the default provider. By setting this, you will force Vagrant to use this provider for any *new* Vagrant environments. Existing Vagrant environments will continue to use the provider they came `up` with. Once you `vagrant destroy` existing environments, this will take effect. 
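For example, to prefer the VMware Fusion provider for new environments, the variable might be exported in your shell before running Vagrant (illustrative only; substitute whichever provider name applies to your setup):

```
$ export VAGRANT_DEFAULT_PROVIDER=vmware_fusion
$ vagrant up
```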
`VAGRANT_DEFAULT_TEMPLATE` --------------------------- This configures the template used by `vagrant init` when the `--template` option is not provided. `VAGRANT_PREFERRED_PROVIDERS` ------------------------------ This configures providers that Vagrant should prefer. Much like `VAGRANT_DEFAULT_PROVIDER`, this environment variable normally does not need to be set. By setting this you will instruct Vagrant to *prefer* providers defined in this environment variable for any *new* Vagrant environments. Existing Vagrant environments will continue to use the provider they came `up` with. Once you `vagrant destroy` existing environments, this will take effect. A single provider or a comma-delimited list of providers can be defined within this environment variable. `VAGRANT_BOX_UPDATE_CHECK_DISABLE` ----------------------------------- By default, Vagrant will query the metadata API server to see if a newer box version is available for download. This option can be disabled on a per-Vagrantfile basis with `config.vm.box_check_update`, but it can also be disabled globally by setting `VAGRANT_BOX_UPDATE_CHECK_DISABLE` to any non-empty value. This option will not affect global box functions like `vagrant box update`. `VAGRANT_CHECKPOINT_DISABLE` ----------------------------- Vagrant makes occasional network calls to check whether the version of Vagrant that is running locally is up to date. We understand that software making remote calls over the internet for any reason can be undesirable. To suppress these calls, set the environment variable `VAGRANT_CHECKPOINT_DISABLE` to any non-empty value. If you use other HashiCorp tools like Packer and would prefer to configure this setting only once, you can set `CHECKPOINT_DISABLE` instead. `VAGRANT_CWD` -------------- `VAGRANT_CWD` can be set to change the working directory of Vagrant. By default, Vagrant uses the current directory you are in. The working directory is important because it is where Vagrant looks for the Vagrantfile. It also defines how relative paths in the Vagrantfile are expanded, since they're expanded relative to where the Vagrantfile is found. This environmental variable is most commonly set when running Vagrant from a scripting environment in order to set the directory that Vagrant sees. `VAGRANT_DOTFILE_PATH` ----------------------- `VAGRANT_DOTFILE_PATH` can be set to change the directory where Vagrant stores VM-specific state, such as the VirtualBox VM UUID. By default, this is set to `.vagrant`. If you keep your Vagrantfile in a Dropbox folder in order to share the folder between your desktop and laptop (for example), Vagrant will overwrite the files in this directory with the details of the VM on the most recently-used host. To avoid this, you could set `VAGRANT_DOTFILE_PATH` to `.vagrant-laptop` and `.vagrant-desktop` on the respective machines. (Remember to update your `.gitignore`!) `VAGRANT_HOME` --------------- `VAGRANT_HOME` can be set to change the directory where Vagrant stores global state. By default, this is set to `~/.vagrant.d`. The Vagrant home directory is where things such as boxes are stored, so it can actually become quite large on disk. `VAGRANT_LOG` -------------- `VAGRANT_LOG` specifies the verbosity of log messages from Vagrant. By default, Vagrant does not actively show any log messages. Log messages are very useful when troubleshooting issues, reporting bugs, or getting support. At the most verbose level, Vagrant outputs basically everything it is doing.
Available log levels are "debug," "info," "warn," and "error." Both "warn" and "error" are practically useless since there are very few cases of these, and Vagrant generally reports them within the normal output. "info" is a good level to start with if you are having problems, because while it is much louder than normal output, it is still very human-readable and can help identify certain issues. "debug" output is *extremely* verbose and can be difficult to read without some knowledge of Vagrant internals. It is the best output to attach to a support request or bug report, however. `VAGRANT_NO_COLOR` ------------------- If this is set to any value, then Vagrant will not use any colorized output. This is useful if you are logging the output to a file or on a system that does not support colors. The equivalent behavior can be achieved by using the `--no-color` flag on a command-by-command basis. This environmental variable is useful for setting this flag globally. `VAGRANT_FORCE_COLOR` ---------------------- If this is set to any value, then Vagrant will force colored output, even if it detected that there is no TTY or the current environment does not support it. The equivalent behavior can be achieved by using the `--color` flag on a command-by-command basis. This environmental variable is useful for setting this flag globally. `VAGRANT_NO_PLUGINS` --------------------- If this is set to any value, then Vagrant will not load any 3rd party plugins. This is useful if you install a plugin and it is introducing instability to Vagrant, or if you want a specific Vagrant environment to not load plugins. Note that any `vagrant plugin` commands automatically do not load any plugins, so if you do install any unstable plugins, you can always use the `vagrant plugin` commands without having to worry. `VAGRANT_ALLOW_PLUGIN_SOURCE_ERRORS` ------------------------------------- If this is set to any value, then Vagrant will not error when a configured plugin source is unavailable. When installing a Vagrant plugin Vagrant will error and halt if a plugin source is inaccessible. In some cases it may be desirable to ignore inaccessible sources and continue with the plugin installation. Enabling this value will cause Vagrant to simply log the plugin source error and continue. `VAGRANT_INSTALL_LOCAL_PLUGINS` -------------------------------- If this is set to any value, Vagrant will not prompt for confirmation prior to installing local plugins which have been defined within the local Vagrantfile. `VAGRANT_LOCAL_PLUGINS_LOAD` ----------------------------- If this is set Vagrant will not stub the Vagrantfile when running `vagrant plugin` commands. When this environment variable is set the `--local` flag will not be required by `vagrant plugin` commands to enable local project plugins. `VAGRANT_NO_PARALLEL` ---------------------- If this is set, Vagrant will not perform any parallel operations (such as parallel box provisioning). All operations will be performed in serial. `VAGRANT_DETECTED_OS` ---------------------- This environment variable may be set by the Vagrant launcher to help determine the current runtime platform. In general Vagrant will set this value when running on a Windows host using a cygwin or msys based shell. If this value is set, the Vagrant launcher will not modify it. `VAGRANT_DETECTED_ARCH` ------------------------ This environment variable may be set by the Vagrant launcher to help determine the current runtime architecture in use. 
In general Vagrant will set this value when running on a Windows host using a cygwin or msys based shell. The value the Vagrant launcher may set in this environment variable will not always match the actual architecture of the platform itself. Instead it signifies the detected architecture of the environment it is running within. If this value is set, the Vagrant launcher will not modify it. `VAGRANT_WINPTY_DISABLE` ------------------------- If this is set, Vagrant will *not* wrap interactive processes with winpty where required. `VAGRANT_PREFER_SYSTEM_BIN` ---------------------------- If this is set, Vagrant will prefer using utility executables (like `ssh` and `rsync`) from the local system instead of those vendored within the Vagrant installation. Vagrant will default to using a system-provided `ssh` on Windows. This environment variable can also be used to disable that behavior to force Vagrant to use the embedded `ssh` executable by setting it to `0`. `VAGRANT_SKIP_SUBPROCESS_JAILBREAK` ------------------------------------ As of Vagrant 1.7.3, Vagrant tries to intelligently detect if it is running in the installer or running via Bundler. Although not officially supported, Vagrant tries its best to work when executed via Bundler. When Vagrant detects that you have spawned a subprocess that lives outside of Vagrant's installer, Vagrant will do its best to reset the preserved environment during the subprocess execution. If Vagrant detects it is running outside of the official installer, the original environment will always be restored. You can disable this automatic jailbreak by setting `VAGRANT_SKIP_SUBPROCESS_JAILBREAK`. `VAGRANT_VAGRANTFILE` ---------------------- This specifies the filename of the Vagrantfile that Vagrant searches for. By default, this is "Vagrantfile". Note that this is *not* a file path, but just a filename. This environmental variable is commonly used in scripting environments where a single folder may contain multiple Vagrantfiles representing different configurations. `VAGRANT_DISABLE_VBOXSYMLINKCREATE` ------------------------------------ If set, this will disable the ability to create symlinks with all VirtualBox shared folders. Defaults to true if the option is not set. This can be overridden on a per-folder basis within your Vagrantfile config by setting the `SharedFoldersEnableSymlinksCreate` option to true. `VAGRANT_ENABLE_RESOLV_REPLACE` -------------------------------- Use the Ruby Resolv library in place of the libc resolver. `VAGRANT_DISABLE_RESOLV_REPLACE` --------------------------------- Vagrant can optionally use the Ruby Resolv library in place of the libc resolver. This can be disabled by setting this environment variable. `VAGRANT_POWERSHELL_VERSION_DETECTION_TIMEOUT` ----------------------------------------------- Vagrant will use a default timeout when checking for the installed version of PowerShell. Occasionally the default can be too low and Vagrant will report being unable to detect the installed version of PowerShell. This environment variable can be used to extend the timeout used during PowerShell version detection. When setting this environment variable, its value will be in seconds. By default, it will use 30 seconds as a timeout. `VAGRANT_USE_VAGRANT_TRIGGERS` ------------------------------- Vagrant will not display the warning about disabling the core trigger feature if the community plugin is installed.
`VAGRANT_IGNORE_WINRM_PLUGIN` ------------------------------ Vagrant will not display warning when `vagrant-winrm` plugin is installed. `VAGRANT_USER_AGENT_PROVISIONAL_STRING` ---------------------------------------- Vagrant will append the contents of this variable to the default user agent header. `VAGRANT_IS_HYPERV_ADMIN` -------------------------- Disable Vagrant's check for Hyper-V admin privileges and allow Vagrant to assume the current user has full access to Hyper-V. This is useful if the internal privilege check incorrectly determines the current user does not have access to Hyper-V.
vagrant Puppet Apply Provisioner Puppet Apply Provisioner ========================= **Provisioner name: `puppet`** The Vagrant Puppet provisioner allows you to provision the guest using [Puppet](https://www.puppetlabs.com/puppet), specifically by calling `puppet apply`, without a Puppet Master. > **Warning:** If you are not familiar with Puppet and Vagrant already, I recommend starting with the [shell provisioner](shell). However, if you are comfortable with Vagrant already, Vagrant is the best way to learn Puppet. > > Options -------- This section lists the complete set of available options for the Puppet provisioner. More detailed examples of how to use the provisioner are available below this section. * [`binary_path`](#binary_path) (string) - Path on the guest to Puppet's `bin/` directory. * [`facter`](#facter) (hash) - A hash of data to set as available facter variables within the Puppet run. * [`hiera_config_path`](#hiera_config_path) (string) - Path to the Hiera configuration on the host. Read the section below on how to use Hiera with Vagrant. * [`manifest_file`](#manifest_file) (string) - The name of the manifest file that will serve as the entrypoint for the Puppet run. This manifest file is expected to exist in the configured `manifests_path` (see below). This defaults to "default.pp". * [`manifests_path`](#manifests_path) (string) - The path to the directory which contains the manifest files. This defaults to "manifests". * [`module_path`](#module_path) (string or array of strings) - Path or paths, on the host, to the directory which contains Puppet modules, if any. * [`environment`](#environment) (string) - Name of the Puppet environment. * [`environment_path`](#environment_path) (string) - Path to the directory that contains environment files on the host disk. * [`environment_variables`](#environment_variables) (hash) - A hash of string key/value pairs to be set as environment variables before the puppet apply run. * [`options`](#options-1) (array of strings) - Additional options to pass to the Puppet executable when running Puppet. * [`synced_folder_type`](#synced_folder_type) (string) - The type of synced folders to use when sharing the data required for the provisioner to work properly. By default this will use the default synced folder type. For example, you can set this to "nfs" to use NFS synced folders. * [`synced_folder_args`](#synced_folder_args) (array) - Arguments that are passed to the folder sync. For example ['-a', '--delete', '--exclude=fixtures'] for the rsync sync command. * [`temp_dir`](#temp_dir) (string) - The directory where all the data associated with the Puppet run (manifest files, modules, etc.) will be stored on the guest machine. * [`working_directory`](#working_directory) (string) - Path in the guest that will be the working directory when Puppet is executed. This is usually only set because relative paths are used in the Hiera configuration. > If only `environment` and `environment_path` are specified, it will parse and use the manifest specified in the `environment.conf` file. If `manifests_path` and `manifest_file` are specified along with the environment options, the manifest from the environment will be overridden by the specified `manifest_file`. If `manifests_path` and `manifest_file` are specified without environments, the old non-environment mode will be used (which will fail on Puppet 4+).
Bare Minimum ------------- The quickest way to get started with the Puppet provisioner is to just enable it: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet" end ``` > `puppet` needs to be installed in the guest VM. > > By default, Vagrant will configure Puppet to look for manifests in the "manifests" folder relative to the project root, and will use the "default.pp" manifest as an entry-point. This means that if your directory tree looks like the one below, you can get started with Puppet with just that one line in your Vagrantfile. ``` $ tree . |-- Vagrantfile |-- manifests |   |-- default.pp ``` Custom Manifest Settings ------------------------- Of course, you can place and name your manifests however you would like. You can override both the directory where Puppet looks for manifests with `manifests_path`, and the manifest file used as the entry-point with `manifest_file`: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet" do |puppet| puppet.manifests_path = "my_manifests" puppet.manifest_file = "default.pp" end end ``` The path can be relative or absolute. If it is relative, it is relative to the project root. You can also specify a manifests path that is on the remote machine already, perhaps put in place by a shell provisioner. In this case, Vagrant will not attempt to upload the manifests directory. To specify a remote manifests path, use the following syntax: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet" do |puppet| puppet.manifests_path = ["vm", "/path/to/manifests"] puppet.manifest_file = "default.pp" end end ``` It is a somewhat odd syntax, but the tuple (two-element array) says that the path is located in the "vm" at "/path/to/manifests". Environments ------------- If you are using Puppet 4 or higher, you can provision using [Puppet Environments](https://docs.puppetlabs.com/puppet/latest/reference/environments.html) by specifying the name of the environment and the path on the local disk to the environment files: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet" do |puppet| puppet.environment_path = "../puppet/environments" puppet.environment = "testenv" end end ``` The default manifest is the environment's `manifests` directory. If the environment has an `environment.conf`, the manifest path is parsed from there. Relative paths are assumed to be relative to the directory of the environment. If the manifest setting in `environment.conf` uses the Puppet variables `$codedir` or `$environment`, they are resolved to the parent directory of `environment_path` and `environment` respectively. Modules -------- Vagrant also supports provisioning with [Puppet modules](https://docs.puppetlabs.com/guides/modules.html). This is done by specifying a path to a modules folder where modules are located. The manifest file is still used as an entry-point. ``` Vagrant.configure("2") do |config| config.vm.provision "puppet" do |puppet| puppet.module_path = "modules" end end ``` Just like the manifests path, the modules path is relative to the project root if a relative path is given. Custom Facts ------------- Custom facts to be exposed by [Facter](https://puppetlabs.com/facter) can be specified as well: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet" do |puppet| puppet.facter = { "vagrant" => "1" } end end ``` Now, the `$vagrant` variable in your Puppet manifests will equal "1". Configuring Hiera ------------------ [Hiera](https://docs.puppetlabs.com/hiera/1/) configuration is also supported.
`hiera_config_path` specifies the path to the Hiera configuration file stored on the host. If the `:datadir` setting in the Hiera configuration file is a relative path, `working_directory` should be used to specify the directory in the guest that path is relative to. ``` Vagrant.configure("2") do |config| config.vm.provision "puppet" do |puppet| puppet.hiera_config_path = "hiera.yaml" puppet.working_directory = "/tmp/vagrant-puppet" end end ``` `hiera_config_path` can be relative or absolute. If it is relative, it is relative to the project root. `working_directory` is an absolute path within the guest. Additional Options ------------------- Puppet supports a lot of command-line flags. Basically any setting can be overridden on the command line. To give you the most power and flexibility possible with Puppet, Vagrant allows you to specify custom command line flags to use: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet" do |puppet| puppet.options = "--verbose --debug" end end ``` vagrant CFEngine Provisioner CFEngine Provisioner ===================== **Provisioner name: `cfengine`** The Vagrant CFEngine provisioner allows you to provision the guest using [CFEngine](https://cfengine.com/). It can set up both CFEngine policy servers and clients. You can configure both the policy server and the clients in a single [multi-machine `Vagrantfile`](../multi-machine/index). > **Warning:** If you are not familiar with CFEngine and Vagrant already, I recommend starting with the [shell provisioner](shell). However, if you are comfortable with Vagrant already, Vagrant is the best way to learn CFEngine. > > Let us look at some common examples first. See the bottom of this document for a comprehensive list of options. Setting up a CFEngine server and client ---------------------------------------- The CFEngine provisioner automatically installs the latest [CFEngine Community packages](https://cfengine.com/cfengine-linux-distros) on the VM, then configures and starts CFEngine according to your specification. Configuring a VM as a CFEngine policy server is easy: ``` Vagrant.configure("2") do |config| config.vm.provision "cfengine" do |cf| cf.am_policy_hub = true end end ``` The host will automatically be [bootstrapped](https://cfengine.com/docs/3.5/manuals-architecture-networking.html#bootstrapping) to itself to become a policy server. If you already have a working CFEngine policy server, you can get a CFEngine client installed and bootstrapped by specifying its IP address: ``` Vagrant.configure("2") do |config| config.vm.provision "cfengine" do |cf| cf.policy_server_address = "10.0.2.15" end end ``` Copying files to the VM ------------------------ If you have some policy or other files that you want to install by default on a VM, you can use the `files_path` attribute: ``` Vagrant.configure("2") do |config| config.vm.provision "cfengine" do |cf| cf.am_policy_hub = true cf.files_path = "cfengine_files" end end ``` Everything under `cfengine_files/` in the Vagrant project directory will be recursively copied under `/var/cfengine/` in the VM, on top of its default contents. A common use case is to add your own files to `/var/cfengine/masterfiles/` in the policy server. Assuming your extra files are stored under `cfengine_files/masterfiles/`, the line shown above will add them to the VM after CFEngine is installed, but before it is bootstrapped. 
Modes of operation ------------------- The default mode of operation is `:bootstrap`, which results in CFEngine being bootstrapped according to the information provided in the `Vagrantfile`. You can also set `mode` to `:single_run`, which will run `cf-agent` once on the host to execute the file specified in the `run_file` parameter, but will not bootstrap it, so it will not be executed periodically. The recommended mode of operation is `:bootstrap`, as you get the full benefits of CFEngine when you have it running periodically. Running a standalone file -------------------------- If you want to run a standalone file, you can specify the `run_file` parameter. The file will be copied to the VM and executed on its own using `cf-agent`. Note that the file needs to be a standalone policy, including its own [`body common control`](https://cfengine.com/docs/3.5/reference-components.html#common-control). The `run_file` parameter is mandatory if `mode` is set to `:single_run`, but can also be specified when `mode` is set to `:bootstrap` - in this case the file will be executed after the host has been bootstrapped. Full Alphabetical List of Configuration Options ------------------------------------------------ * [`am_policy_hub`](#am_policy_hub) (boolean, default `false`) determines whether the VM will be configured as a CFEngine policy hub (automatically bootstrapped to its own IP address). You can combine it with `policy_server_address` if the VM has multiple network interfaces and you want to bootstrap to a specific one. * [`extra_agent_args`](#extra_agent_args) (string, default `nil`) can be used to pass additional arguments to `cf-agent` when it is executed. For example, you could use it to pass the `-I` or `-v` options to enable additional output from the agent. * [`classes`](#classes) (array, default `nil`) can be used to define additional classes during `cf-agent` runs. These classes will be defined using the `-D` option to `cf-agent`. * [`deb_repo_file`](#deb_repo_file) (string, default `"/etc/apt/sources.list.d/cfengine-community.list"`) specifies the file in which the CFEngine repository information will be stored in Debian systems. * [`deb_repo_line`](#deb_repo_line) (string, default `"deb https://cfengine.com/pub/apt $(lsb_release -cs) main"`) specifies the repository to use for `.deb` packages. * [`files_path`](#files_path) (string, default `nil`) specifies a directory that will be copied to the VM on top of the default `/var/cfengine/` (the contents of `/var/cfengine/` will not be replaced, the files will added to it). * [`force_bootstrap`](#force_bootstrap) (boolean, default `false`) specifies whether CFEngine will be bootstrapped again even if the host has already been bootstrapped. * [`install`](#install) (boolean or `:force`, default `true`) specifies whether CFEngine will be installed on the VM if needed. If you set this parameter to `:force`, then CFEngine will be reinstalled even if it is already present on the machine. * [`mode`](#mode) (`:bootstrap` or `:single_run`, default `:bootstrap`) specifies whether CFEngine will be bootstrapped so that it executes periodically, or will be run a single time. If `mode` is set to `:single_run` you have to set `run_file`. * [`policy_server_address`](#policy_server_address) (string, no default) specifies the IP address of the policy server to which CFEngine will be bootstrapped. 
If `am_policy_hub` is set to `true`, this parameter defaults to the VM's IP address, but can still be set (for example, if the VM has more than one network interface). * [`repo_gpg_key_url`](#repo_gpg_key_url) (string, default `"https://cfengine.com/pub/gpg.key"`) contains the URL to obtain the GPG key used to verify the packages obtained from the repository. * [`run_file`](#run_file) (string, default `nil`) can be used to specify a file inside the Vagrant project directory that will be copied to the VM and executed once using `cf-agent`. This parameter is mandatory if `mode` is set to `:single_run`, but can also be specified when `mode` is set to `:bootstrap` - in this case the file will be executed after the host has been bootstrapped. * [`upload_path`](#upload_path) (string, default `"/tmp/vagrant-cfengine-file"`) specifies the file to which `run_file` (if specified) will be copied on the VM before being executed. * [`yum_repo_file`](#yum_repo_file) (string, default `"/etc/yum.repos.d/cfengine-community.repo"`) specifies the file in which the CFEngine repository information will be stored in RedHat systems. * [`yum_repo_url`](#yum_repo_url) (string, default `"https://cfengine.com/pub/yum/"`) specifies the URL of the repository to use for `.rpm` packages. * [`package_name`](#package_name) (string, default `"cfengine-community"`) specifies the name of the package used to install CFEngine. vagrant Ansible Provisioner Ansible Provisioner ==================== **Provisioner name: `ansible`** The Vagrant Ansible provisioner allows you to provision the guest using [Ansible](http://ansible.com) playbooks by executing **`ansible-playbook` from the Vagrant host**. > **Warning:** If you are not familiar with Ansible and Vagrant already, I recommend starting with the [shell provisioner](shell). However, if you are comfortable with Vagrant already, Vagrant is a great way to learn Ansible. > > Setup Requirements ------------------- * **[Install Ansible](https://docs.ansible.com/intro_installation.html#installing-the-control-machine) on your Vagrant host**. * Your Vagrant host should ideally provide a recent version of OpenSSH that [supports ControlPersist](https://docs.ansible.com/faq.html#how-do-i-get-ansible-to-reuse-connections-enable-kerberized-ssh-or-have-ansible-pay-attention-to-my-local-ssh-config-file). If installing Ansible directly on the Vagrant host is not an option in your development environment, you might be looking for the [Ansible Local provisioner](ansible_local) alternative. Usage ------ This page only documents the specific parts of the `ansible` (remote) provisioner. General Ansible concepts like Playbook or Inventory are shortly explained in the [introduction to Ansible and Vagrant](ansible_intro). ### Simplest Configuration To run Ansible against your Vagrant guest, the basic `Vagrantfile` configuration looks like: ``` Vagrant.configure("2") do |config| # # Run Ansible from the Vagrant Host # config.vm.provision "ansible" do |ansible| ansible.playbook = "playbook.yml" end end ``` Options -------- This section lists the *specific* options for the Ansible (remote) provisioner. In addition to the options listed below, this provisioner supports the [**common options** for both Ansible provisioners](ansible_common). 
* [`ask_become_pass`](#ask_become_pass) (boolean) - require Ansible to [prompt for a password](https://docs.ansible.com/intro_getting_started.html#remote-connection-information) when switching to another user with the [become/sudo mechanism](http://docs.ansible.com/ansible/become.html). The default value is `false`. * [`ask_sudo_pass`](#ask_sudo_pass) (boolean) - Backwards compatible alias for the [ask\_become\_pass](#ask_become_pass) option. > **Deprecation:** The `ask_sudo_pass` option is deprecated and will be removed in a future release. Please use the [**`ask_become_pass`**](#ask_become_pass) option instead. > > * [`ask_vault_pass`](#ask_vault_pass) (boolean) - require Ansible to [prompt for a vault password](https://docs.ansible.com/playbooks_vault.html#vault). The default value is `false`. * [`force_remote_user`](#force_remote_user) (boolean) - require Vagrant to set the `ansible_ssh_user` setting in the generated inventory, or as an extra variable when a static inventory is used. All the Ansible `remote_user` parameters will then be overridden by the value of `config.ssh.username` of the [Vagrant SSH Settings](../vagrantfile/ssh_settings). If this option is set to `false` Vagrant will set the Vagrant SSH username as a default Ansible remote user, but `remote_user` parameters of your Ansible plays or tasks will still be taken into account and thus override the Vagrant configuration. The default value is `true`. > **Compatibility Note:** This option was introduced in Vagrant 1.8.0. Previous Vagrant versions behave like if this option was set to `false`. > > * [`host_key_checking`](#host_key_checking) (boolean) - require Ansible to [enable SSH host key checking](https://docs.ansible.com/intro_getting_started.html#host-key-checking). The default value is `false`. * [`raw_ssh_args`](#raw_ssh_args) (array of strings) - require Ansible to apply a list of OpenSSH client options. Example: `['-o ControlMaster=no']`. It is an *unsafe wildcard* that can be used to pass additional SSH settings to Ansible via `ANSIBLE_SSH_ARGS` environment variable, overriding any other SSH arguments (e.g. defined in an [`ansible.cfg` configuration file](https://docs.ansible.com/intro_configuration.html#ssh-args)). Tips and Tricks ---------------- ### Ansible Parallel Execution Vagrant is designed to provision [multi-machine environments](../multi-machine) in sequence, but the following configuration pattern can be used to take advantage of Ansible parallelism: ``` # Vagrant 1.7+ automatically inserts a different # insecure keypair for each new VM created. The easiest way # to use the same keypair for all the machines is to disable # this feature and rely on the legacy insecure key. # config.ssh.insert_key = false # # Note: # As of Vagrant 1.7.3, it is no longer necessary to disable # the keypair creation when using the auto-generated inventory. N = 3 (1..N).each do |machine_id| config.vm.define "machine#{machine_id}" do |machine| machine.vm.hostname = "machine#{machine_id}" machine.vm.network "private_network", ip: "192.168.77.#{20+machine_id}" # Only execute once the Ansible provisioner, # when all the machines are up and ready. 
if machine_id == N machine.vm.provision :ansible do |ansible| # Disable default limit to connect to all the machines ansible.limit = "all" ansible.playbook = "playbook.yml" end end end end ``` > **Tip:** If you apply this parallel provisioning pattern with a static Ansible inventory, you will have to organize things so that [all the relevant private keys are provided to the `ansible-playbook` command](https://github.com/hashicorp/vagrant/pull/5765#issuecomment-120247738). The same considerations apply if you are using multiple private keys for the same machine (see [`config.ssh.private_key_path` SSH setting](../vagrantfile/ssh_settings)). > > ### Force Paramiko Connection Mode The Ansible provisioner is implemented with native OpenSSH support in mind, and there is no official support for [paramiko](https://github.com/paramiko/paramiko/) (a native Python SSHv2 protocol library). If you really need to use this connection mode though, it is possible to enable paramiko as illustrated in the following configuration examples: With auto-generated inventory: ``` ansible.raw_arguments = ["--connection=paramiko"] ``` With a custom inventory, the private key must be specified (e.g. via an `ansible.cfg` configuration file, the `--private-key` argument, or as part of your inventory file): ``` ansible.inventory_path = "./my-inventory" ansible.raw_arguments = [ "--connection=paramiko", "--private-key=/home/.../.vagrant/machines/.../private_key" ] ```
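As a closing example, here is a minimal sketch that combines several of the remote-provisioner options documented above (the playbook name and option values are illustrative):

```
Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook          = "playbook.yml"
    # Prompt for the become/sudo password instead of assuming passwordless sudo
    ansible.ask_become_pass   = true
    # Keep SSH host key checking disabled (the default), stated explicitly here
    ansible.host_key_checking = false
  end
end
```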
vagrant Provisioning Provisioning ============= Provisioners in Vagrant allow you to automatically install software, alter configurations, and more on the machine as part of the `vagrant up` process. This is useful since [boxes](../boxes) typically are not built *perfectly* for your use case. Of course, if you want to just use `vagrant ssh` and install the software by hand, that works. But by using the provisioning systems built-in to Vagrant, it automates the process so that it is repeatable. Most importantly, it requires no human interaction, so you can `vagrant destroy` and `vagrant up` and have a fully ready-to-go work environment with a single command. Powerful. Vagrant gives you multiple options for provisioning the machine, from simple shell scripts to more complex, industry-standard configuration management systems. If you've never used a configuration management system before, it is recommended you start with basic [shell scripts](shell) for provisioning. You can find the full list of built-in provisioners and usage of these provisioners in the navigational area to the left. When Provisioning Happens -------------------------- Provisioning happens at certain points during the lifetime of your Vagrant environment: * On the first `vagrant up` that creates the environment, provisioning is run. If the environment was already created and the up is just resuming a machine or booting it up, they will not run unless the `--provision` flag is explicitly provided. * When `vagrant provision` is used on a running environment. * When `vagrant reload --provision` is called. The `--provision` flag must be present to force provisioning. You can also bring up your environment and explicitly *not* run provisioners by specifying `--no-provision`. vagrant Shared Chef Options Shared Chef Options ==================== All Chef Provisioners ---------------------- The following options are available to all Vagrant Chef provisioners. Many of these options are for advanced users only and should not be used unless you understand their purpose. * [`binary_path`](#binary_path) (string) - The path to Chef's `bin/` directory on the guest machine. * [`binary_env`](#binary_env) (string) - Arbitrary environment variables to set before running the Chef provisioner command. This should be of the format `KEY=value` as a string. * [`install`](#install) (boolean, string) - Install Chef on the system if it does not exist. The default value is "true", which will use the official Omnibus installer from Chef. This is a trinary attribute (it can have three values): + [`true`](#true) (boolean) - install Chef + [`false`](#false) (boolean) - do not install Chef + [`"force"`](#quot-force-quot-) (string) - install Chef, even if it is already installed at the proper version on the guest * [`installer_download_path`](#installer_download_path) (string) - The path where the Chef installer will be downloaded to. This option is only honored if the `install` attribute is `true` or `"force"`. The default value is to use the path provided by Chef's Omnibus installer, which varies between releases. This value has no effect on Windows because Chef's omnibus installer lacks the option on Windows. * [`log_level`](#log_level) (string) - The Chef log level. See the Chef docs for acceptable values. * [`product`](#product) (string) - The name of the Chef product to install. The default value is "chef", which corresponds to the Chef Client. You can also specify "chefdk", which will install the Chef Development Kit. 
At the time of this writing, the ChefDK is only available through the "current" channel, so you will need to update that value as well. * [`channel`](#channel) (string) - The release channel from which to pull the Chef Client or the Chef Development Kit. The default value is `"stable"` which will pull the latest stable version of the Chef Client. For newer versions, or if you wish to install the Chef Development Kit, you may need to change the channel to "current". Because Chef Software floats the versions that are contained in the channel, they may change and Vagrant is unable to detect this. * [`version`](#version) (string) - The version of Chef to install on the guest. If Chef is already installed on the system, the installed version is compared with the requested version. If they match, no action is taken. If they do not match, the value specified in this attribute will be installed in favor of the existing version (a message will be displayed). You can also specify "latest" (default), which will install the latest version of Chef on the system. In this case, Chef will use whatever version is on the system. To force the newest version of Chef to be installed on every provision, set the [`install`](#install) option to "force". * [`omnibus_url`](#omnibus_url) (string) - Location of Omnibus installation scripts. This URL specifies the location of install.sh/install.ps1 for Linux/Unix and Windows respectively. It defaults to <https://omnitruck.chef.io>. The full URL is in this case: + Linux/Unix: <https://omnitruck.chef.io/install.sh> + Windows: <https://omnitruck.chef.io/install.ps1> If you want to have <https://example.com/install.sh> as Omnibus script for your Linux/Unix installations, you should set this option to <https://example.com> Runner Chef Provisioners ------------------------- The following options are available to any of the Chef "runner" provisioners which include [Chef Solo](chef_solo), [Chef Zero](chef_zero), and [Chef Client](chef_client). * [`arguments`](#arguments) (string) - A list of additional arguments to pass on the command-line to Chef. Since these are passed in a shell-like environment, be sure to properly quote and escape characters if necessary. By default, no additional arguments are sent. * [`attempts`](#attempts) (int) - The number of times Chef will be run if an error occurs. This defaults to 1. This can be increased to a higher number if your Chef runs take multiple runs to reach convergence. * [`custom_config_path`](#custom_config_path) (string) - A path to a custom Chef configuration local on your machine that will be used as the Chef configuration. This Chef configuration will be loaded *after* the Chef configuration that Vagrant generates, allowing you to override anything that Vagrant does. This is also a great way to use new Chef features that may not be supported fully by Vagrant's abstractions yet. * [`encrypted_data_bag_secret_key_path`](#encrypted_data_bag_secret_key_path) (string) - The path to the secret key file to decrypt encrypted data bags. By default, this is not set. * [`environment`](#environment) (string) - The environment you want the Chef run to be a part of. * [`formatter`](#formatter) (string) - The formatter to use for output from Chef. * [`http_proxy`](#http_proxy), `http_proxy_user`, `http_proxy_pass`, `no_proxy` (string) - Settings to configure HTTP and HTTPS proxies to use from Chef. These settings are also available with `http` replaced with `https` to configure HTTPS proxies. 
* [`json`](#json) (hash) - Custom node attributes to pass into the Chef run. * [`log_level`](#log_level-1) (string) - The log level for Chef output. This defaults to "info". * [`node_name`](#node_name) (string) - The node name for the Chef Client. By default this will be your hostname. * [`provisioning_path`](#provisioning_path) (string) - The path on the remote machine where Vagrant will store all necessary files for provisioning such as cookbooks, configurations, etc. This path must be world writable. By default this is `/tmp/vagrant-chef-#` where "#" is replaced by a unique counter. * [`run_list`](#run_list) (array) - The run list that will be executed on the node. * [`file_cache_path`](#file_cache_path) and `file_backup_path` (string) - Paths on the remote machine where files will be cached and backed up. It is useful sometimes to configure this to a synced folder address so that this can be shared across many Vagrant runs. * [`verbose_logging`](#verbose_logging) (boolean) - Whether or not to enable the Chef `verbose_logging` option. By default this is false. * [`enable_reporting`](#enable_reporting) (boolean) - Whether or not to enable the Chef `enable_reporting` option. By default this is true. vagrant Chef Zero Provisioner Chef Zero Provisioner ====================== **Provisioner name: `chef_zero`** The Vagrant Chef Zero provisioner allows you to provision the guest using [Chef](https://www.getchef.com/chef/), specifically with [Chef Zero/local mode](https://docs.getchef.com/ctl_chef_client.html#run-in-local-mode). This new provisioner is a middle ground between running a full blown Chef Server and using the limited [Chef Solo](chef_solo) provisioner. It runs a local in-memory Chef Server and fakes the validation and client key registration. > **Warning:** If you are not familiar with Chef and Vagrant already, I recommend starting with the [shell provisioner](shell). However, if you are comfortable with Vagrant already, Vagrant is the best way to learn Chef. > > Options -------- This section lists the complete set of available options for the Chef Zero provisioner. More detailed examples of how to use the provisioner are available below this section. * [`cookbooks_path`](#cookbooks_path) (string or array) - A list of paths to where cookbooks are stored. By default this is "cookbooks", expecting a cookbooks folder relative to the Vagrantfile location. * [`data_bags_path`](#data_bags_path) (string or array) - A path where data bags are stored. By default, no data bag path is set. Chef 12 or higher is required to use the array option. Chef 11 and lower only accept a string value. * [`environments_path`](#environments_path) (string) - A path where environment definitions are located. By default, no environments folder is set. * [`nodes_path`](#nodes_path) (string or array) - A list of paths where node objects (in JSON format) are stored. By default, no nodes path is set. This value is required. * [`environment`](#environment) (string) - The environment you want the Chef run to be a part of. This requires Chef 11.6.0 or later, and that `environments_path` is set. * [`roles_path`](#roles_path) (string or array) - A list of paths where roles are defined. By default this is empty. Multiple role directories are only supported by Chef 11.8.0 and later. * [`synced_folder_type`](#synced_folder_type) (string) - The type of synced folders to use when sharing the data required for the provisioner to work properly. By default this will use the default synced folder type. 
For example, you can set this to "nfs" to use NFS synced folders. In addition to all the options listed above, the Chef Zero provisioner supports the [common options for all Chef provisioners](chef_common). Usage ------ The Chef Zero provisioner is configured basically the same way as the Chef Solo provisioner. See the [Chef Solo documentations](chef_solo) for more information. A basic example could look like this: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_zero" do |chef| # Specify the local paths where Chef data is stored chef.cookbooks_path = "cookbooks" chef.data_bags_path = "data_bags" chef.nodes_path = "nodes" chef.roles_path = "roles" # Add a recipe chef.add_recipe "apache" # Or maybe a role chef.add_role "web" end end ``` vagrant Shell Provisioner Shell Provisioner ================== **Provisioner name: `"shell"`** The Vagrant Shell provisioner allows you to upload and execute a script within the guest machine. Shell provisioning is ideal for users new to Vagrant who want to get up and running quickly and provides a strong alternative for users who are not comfortable with a full configuration management system such as Chef or Puppet. For POSIX-like machines, the shell provisioner executes scripts with SSH. For Windows guest machines that are configured to use WinRM, the shell provisioner executes PowerShell and Batch scripts over WinRM. Options -------- The shell provisioner takes various options. One of `inline` or `path` is required: * [`inline`](#inline) (string) - Specifies a shell command inline to execute on the remote machine. See the [inline scripts](#inline-scripts) section below for more information. * [`path`](#path) (string) - Path to a shell script to upload and execute. It can be a script relative to the project Vagrantfile or a remote script (like a [gist](https://gist.github.com)). The remainder of the available options are optional: * [`args`](#args) (string or array) - Arguments to pass to the shell script when executing it as a single string. These arguments must be written as if they were typed directly on the command line, so be sure to escape characters, quote, etc. as needed. You may also pass the arguments in using an array. In this case, Vagrant will handle quoting for you. * [`env`](#env) (hash) - List of key-value pairs to pass in as environment variables to the script. Vagrant will handle quoting for environment variable values, but the keys remain untouched. * [`binary`](#binary) (boolean) - Vagrant automatically replaces Windows line endings with Unix line endings. If this is false, then Vagrant will not do this. By default this is "false". If the shell provisioner is communicating over WinRM, this defaults to "true". * [`privileged`](#privileged) (boolean) - Specifies whether to execute the shell script as a privileged user or not (`sudo`). By default this is "true". Windows guests use a scheduled task to run as a true administrator without the WinRM limitations. * [`upload_path`](#upload_path) (string) - Is the remote path where the shell script will be uploaded to. The script is uploaded as the SSH user over SCP, so this location must be writable to that user. By default this is "/tmp/vagrant-shell". On Windows, this will default to "C:\tmp\vagrant-shell". * [`keep_color`](#keep_color) (boolean) - Vagrant automatically colors output in green and red depending on whether the output is from stdout or stderr. If this is true, Vagrant will not do this, allowing the native colors from the script to be outputted. 
* [`name`](#name) (string) - This value will be displayed in the output so that identification by the user is easier when many shell provisioners are present. * [`powershell_args`](#powershell_args) (string) - Extra arguments to pass to `PowerShell` if you are provisioning with PowerShell on Windows. * [`powershell_elevated_interactive`](#powershell_elevated_interactive) (boolean) - Run an elevated script in interactive mode on Windows. By default this is "false". Must also be `privileged`. Be sure to enable auto-login for Windows as the user must be logged in for interactive mode to work. * [`md5`](#md5) (string) - MD5 checksum used to validate remotely downloaded shell files. * [`sha1`](#sha1) (string) - SHA1 checksum used to validate remotely downloaded shell files. * [`sensitive`](#sensitive) (boolean) - Marks the Hash values used in the `env` option as sensitive and hides them from output. By default this is "false". Inline Scripts --------------- Perhaps the easiest way to get started is with an inline script. An inline script is a script that is given to Vagrant directly within the Vagrantfile. An example is best: ``` Vagrant.configure("2") do |config| config.vm.provision "shell", inline: "echo Hello, World" end ``` This causes `echo Hello, World` to be run within the guest machine when provisioners are run. Combined with a little bit more Ruby, this makes it very easy to embed your shell scripts directly within your Vagrantfile. Another example below: ``` $script = <<-SCRIPT echo I am provisioning... date > /etc/vagrant_provisioned_at SCRIPT Vagrant.configure("2") do |config| config.vm.provision "shell", inline: $script end ``` I understand that if you are not familiar with Ruby, the above may seem very advanced or foreign. But do not fear, what it is doing is quite simple: the script is assigned to a global variable `$script`. This global variable contains a string which is then passed in as the inline script to the Vagrant configuration. Of course, if any Ruby in your Vagrantfile outside of basic variable assignment makes you uncomfortable, you can use an actual script file, documented in the next section. For Windows guest machines, the inline script *must* be PowerShell. Batch scripts are not allowed as inline scripts. External Script ---------------- The shell provisioner can also take an option specifying a path to a shell script on the host machine. Vagrant will then upload this script into the guest and execute it. An example: ``` Vagrant.configure("2") do |config| config.vm.provision "shell", path: "script.sh" end ``` Relative paths, such as above, are expanded relative to the location of the root Vagrantfile for your project. Absolute paths can also be used, as well as shortcuts such as `~` (home directory) and `..` (parent directory). If you use a remote script as part of your provisioning process, you can pass in its URL as the `path` argument as well: ``` Vagrant.configure("2") do |config| config.vm.provision "shell", path: "https://example.com/provisioner.sh" end ``` If you are running a Batch or PowerShell script for Windows, make sure that the external path has the proper extension (".bat" or ".ps1"), because Windows uses this to determine what kind of file it is to execute. If you exclude this extension, it likely will not work. To run a script already available on the guest you can use an inline script to invoke the remote script on the guest. 
``` Vagrant.configure("2") do |config| config.vm.provision "shell", inline: "/bin/sh /path/to/the/script/already/on/the/guest.sh" end ``` Script Arguments ----------------- You can parameterize your scripts as well like any normal shell script. These arguments can be specified to the shell provisioner. They should be specified as a string as they'd be typed on the command line, so be sure to properly escape anything: ``` Vagrant.configure("2") do |config| config.vm.provision "shell" do |s| s.inline = "echo $1" s.args = "'hello, world!'" end end ``` You can also specify arguments as an array if you do not want to worry about quoting: ``` Vagrant.configure("2") do |config| config.vm.provision "shell" do |s| s.inline = "echo $1" s.args = ["hello, world!"] end end ``` vagrant Salt Provisioner Salt Provisioner ================= **Provisioner name: `salt`** The Vagrant Salt provisioner allows you to provision the guest using [Salt](http://saltstack.com/) states. Salt states are [YAML](https://en.wikipedia.org/wiki/YAML) documents that describes the current state a machine should be in, e.g. what packages should be installed, which services are running, and the contents of arbitrary files. *NOTE: The Salt provisioner is builtin to Vagrant. If the `vagrant-salt` plugin is installed, it should be uninstalled to ensure expected behavior.* Masterless Quickstart ---------------------- What follows is a basic Vagrantfile that will get salt working on a single minion, without a master: ``` Vagrant.configure("2") do |config| ## Choose your base box config.vm.box = "precise64" ## For masterless, mount your salt file root config.vm.synced_folder "salt/roots/", "/srv/salt/" ## Use all the defaults: config.vm.provision :salt do |salt| salt.masterless = true salt.minion_config = "salt/minion" salt.run_highstate = true end end ``` This sets up a shared folder for the salt root, and copies the minion file over, then runs `state.highstate` on the machine. Your minion file must contain the line `file_client: local` in order to work in a masterless setup. Install Options ---------------- * [`install_master`](#install_master) (boolean) - Should vagrant install the salt-master on this machine. Not supported on Windows guest machines. * [`no_minion`](#no_minion) (boolean) - Do not install the minion, default `false`. Not supported on Windows guest machines. * [`install_syndic`](#install_syndic) (boolean) - Install the salt-syndic, default `false`. Not supported on Windows guest machines. * [`install_type`](#install_type) (stable | git | daily | testing) - Whether to install from a distribution's stable package manager, git tree-ish, daily ppa, or testing repository. Not supported on Windows guest machines. * [`install_args`](#install_args) (string, default: "develop") - When performing a git install, you can specify a branch, tag, or any treeish. Not supported on Windows. * [`always_install`](#always_install) (boolean) - Installs salt binaries even if they are already detected, default `false` * [`bootstrap_script`](#bootstrap_script) (string) - Path to your customized salt-bootstrap.sh script. Not supported on Windows guest machines. * [`bootstrap_options`](#bootstrap_options) (string) - Additional command-line options to pass to the bootstrap script. * [`version`](#version) (string, default: "2017.7.1") - Version of minion to be installed. * [`python_version`](#python_version) (string, default: "2") - Major Python version of minion to be installed. Only valid for minion versions >= 2017.7.0. 
Only supported on Windows guest machines. Minion Options --------------- These only make sense when `no_minion` is `false`. * [`minion_config`](#minion_config) (string, default: "salt/minion") - Path to a custom salt minion config file. * [`minion_key`](#minion_key) (string, default: "salt/key/minion.key") - Path to your minion key * [`minion_id`](#minion_id) (string) - Unique identifier for minion. Used for masterless and preseeding keys. * [`minion_pub`](#minion_pub) (string, default: "salt/key/minion.pub") - Path to your minion public key * [`grains_config`](#grains_config) (string) - Path to a custom salt grains file. On Windows, the minion needs `ipc_mode: tcp` set otherwise it will [fail to communicate](https://github.com/saltstack/salt/issues/22796) with the master. * [`masterless`](#masterless) (boolean) - Calls state.highstate in local mode. Uses `minion_id` and `pillar_data` when provided. * [`minion_json_config`](#minion_json_config) (string) - Valid json for configuring the salt minion (`-j` in bootstrap-salt.sh). Not supported on Windows. * [`salt_call_args`](#salt_call_args) (array) - An array of additional command line flag arguments to be passed to the `salt-call` command when provisioning with masterless. Master Options --------------- These only make sense when `install_master` is `true`. Not supported on Windows guest machines. * [`master_config`](#master_config) (string, default: "salt/master") Path to a custom salt master config file. * [`master_key`](#master_key) (string, default: "salt/key/master.pem") - Path to your master key. * [`master_pub`](#master_pub) (string, default: "salt/key/master.pub") - Path to your master public key. * [`seed_master`](#seed_master) (dictionary) - Upload keys to master, thereby pre-seeding it before use. Example: `{minion_name:/path/to/key.pub}` * [`master_json_config`](#master_json_config) (string) - Valid json for configuring the salt master (`-J` in bootstrap-salt.sh). Not supported on Windows. * [`salt_args`](#salt_args) (array) - An array of additional command line flag arguments to be passed to the `salt` command when provisioning with masterless. Execute States --------------- Either of the following may be used to actually execute states during provisioning. * [`run_highstate`](#run_highstate) - (boolean) Executes `state.highstate` on vagrant up. Can be applied to any machine. Execute Runners ---------------- Either of the following may be used to actually execute runners during provisioning. * [`run_overstate`](#run_overstate) - (boolean) Executes `state.over` on vagrant up. Can be applied to the master only. This is superseded by orchestrate. Not supported on Windows guest machines. * [`orchestrations`](#orchestrations) - (array of strings) Executes `state.orchestrate` on vagrant up. Can be applied to the master only. This is superseded by run\_overstate. Not supported on Windows guest machines. Output Control --------------- These may be used to control the output of state execution: * [`colorize`](#colorize) (boolean) - If true, output is colorized. Defaults to false. * [`log_level`](#log_level) (string) - The verbosity of the outputs. Defaults to "debug". Can be one of "all", "garbage", "trace", "debug", "info", or "warning". Requires `verbose` to be set to "true". * [`verbose`](#verbose) (boolean) - The verbosity of the outputs. Defaults to "false". Must be true for log\_level taking effect and the output of the salt-commands being displayed. 
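For illustration, a minimal sketch that enables the output options above in a masterless setup (the values are illustrative; `verbose` must be true for `log_level` to take effect):

```
Vagrant.configure("2") do |config|
  config.vm.provision :salt do |salt|
    salt.masterless    = true
    salt.run_highstate = true
    # Output control options documented above
    salt.verbose   = true
    salt.log_level = "info"
    salt.colorize  = true
  end
end
```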
Pillar Data ------------ You can export pillar data for use during provisioning by using the `pillar` command. Each call will merge the data so you can safely call it multiple times. The data passed in should only be hashes and lists. Here is an example: ``` config.vm.provision :salt do |salt| # Export hostnames for webserver config salt.pillar({ "hostnames" => { "www" => "www.example.com", "intranet" => "intranet.example.com" } }) # Export database credentials salt.pillar({ "database" => { "user" => "jdoe", "password" => "topsecret" } }) salt.run_highstate = true end ``` On Windows guests, this requires PowerShell 3.0 or higher. Preseeding Keys ---------------- Preseeding keys is the recommended way to handle provisioning using a master. On a machine with Salt installed, run `salt-key --gen-keys=[minion_id]` to generate the necessary .pub and .pem files. For an example of a more advanced setup, look at the original [plugin](https://github.com/saltstack/salty-vagrant/tree/develop/example).
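A minimal sketch of what pre-seeding might look like in a `Vagrantfile`, using the `install_master` and `seed_master` options documented above (the minion id and key path are hypothetical):

```
Vagrant.configure("2") do |config|
  config.vm.provision :salt do |salt|
    salt.install_master = true
    # Key generated beforehand with: salt-key --gen-keys=minion1
    salt.seed_master    = { "minion1" => "salt/key/minion1.pub" }
    salt.run_highstate  = true
  end
end
```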
vagrant Chef Solo Provisioner Chef Solo Provisioner ====================== **Provisioner name: `chef_solo`** The Vagrant Chef Solo provisioner allows you to provision the guest using [Chef](https://www.chef.io/chef/), specifically with [Chef Solo](https://docs.chef.io/chef_solo.html). Chef Solo is ideal for people who are already experienced with Chef, already have Chef cookbooks, or are looking to learn Chef. Specifically, this documentation page will not go into how to use Chef or how to write Chef cookbooks, since Chef is a complete system that is beyond the scope of a single page of documentation. > **Warning:** If you are not familiar with Chef and Vagrant already, I recommend starting with the [shell provisioner](shell). However, if you are comfortable with Vagrant already, Vagrant is the best way to learn Chef. > > Options -------- This section lists the complete set of available options for the Chef Solo provisioner. More detailed examples of how to use the provisioner are available below this section. * [`cookbooks_path`](#cookbooks_path) (string or array) - A list of paths to where cookbooks are stored. By default this is "cookbooks", expecting a cookbooks folder relative to the Vagrantfile location. * [`data_bags_path`](#data_bags_path) (string or array) - A path where data bags are stored. By default, no data bag path is set. Chef 12 or higher is required to use the array option. Chef 11 and lower only accept a string value. * [`environments_path`](#environments_path) (string) - A path where environment definitions are located. By default, no environments folder is set. * [`nodes_path`](#nodes_path) (string or array) - A list of paths where node objects (in JSON format) are stored. By default, no nodes path is set. * [`environment`](#environment) (string) - The environment you want the Chef run to be a part of. This requires Chef 11.6.0 or later, and that `environments_path` is set. * [`recipe_url`](#recipe_url) (string) - URL to an archive of cookbooks that Chef will download and use. * [`roles_path`](#roles_path) (string or array) - A list of paths where roles are defined. By default this is empty. Multiple role directories are only supported by Chef 11.8.0 and later. * [`synced_folder_type`](#synced_folder_type) (string) - The type of synced folders to use when sharing the data required for the provisioner to work properly. By default this will use the default synced folder type. For example, you can set this to "nfs" to use NFS synced folders. In addition to all the options listed above, the Chef Solo provisioner supports the [common options for all Chef provisioners](chef_common). Specifying a Run List ---------------------- The easiest way to get started with the Chef Solo provisioner is to just specify a [run list](https://docs.chef.io/nodes.html#about-run-lists). This looks like: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_solo" do |chef| chef.add_recipe "apache" end end ``` This causes Vagrant to run Chef Solo with the "apache" cookbook. The cookbooks by default are looked for in the "cookbooks" directory relative to your project root. The directory structure ends up looking like this: ``` $ tree . |-- Vagrantfile |-- cookbooks |   |-- apache |   |-- recipes |   |-- default.rb ``` The order of the calls to `add_recipe` will specify the order of the run list. Earlier recipes added with `add_recipe` are run before later recipes added. 
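For example, a run list whose order matters could be declared like this (the cookbook names are illustrative):

```
Vagrant.configure("2") do |config|
  config.vm.provision "chef_solo" do |chef|
    chef.add_recipe "apt"     # runs first
    chef.add_recipe "apache"  # runs second, after "apt"
  end
end
```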
Custom Cookbooks Path ---------------------- Instead of using the default "cookbooks" directory, a custom cookbooks path can also be set via the `cookbooks_path` configuration directive: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_solo" do |chef| chef.cookbooks_path = "my_cookbooks" end end ``` The path can be relative or absolute. If it is relative, it is relative to the project root. The configuration value can also be an array of paths: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_solo" do |chef| chef.cookbooks_path = ["cookbooks", "my_cookbooks"] end end ``` Roles ------ Vagrant also supports provisioning with [Chef roles](https://docs.chef.io/roles.html). This is done by specifying a path to a roles folder where roles are defined and by adding roles to your run list: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_solo" do |chef| chef.roles_path = "roles" chef.add_role("web") end end ``` Just like the cookbooks path, the roles path is relative to the project root if a relative path is given. The configuration value can also be an array of paths on Chef 11.8.0 and newer. On older Chef versions only the first path is used. **Note:** The name of the role file must be the same as the role name. For example the `web` role must be in the `roles_path` as web.json or web.rb. This is required by Chef itself, and is not a limitation imposed by Vagrant. Data Bags ---------- [Data bags](https://docs.chef.io/data_bags.html) are also supported by the Chef Solo provisioner. This is done by specifying a path to your data bags directory: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_solo" do |chef| chef.data_bags_path = "data_bags" end end ``` Custom JSON Data ----------------- Additional configuration data for Chef attributes can be passed in to Chef Solo. This is done by setting the `json` property with a Ruby hash (dictionary-like object), which is converted to JSON and passed in to Chef: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_solo" do |chef| # ... chef.json = { "apache" => { "listen_address" => "0.0.0.0" } } end end ``` Hashes, arrays, etc. can be used with the JSON configuration object. Basically, anything that can be turned cleanly into JSON works. Custom Node Name ----------------- You can specify a custom node name by setting the `node_name` property. This is useful for cookbooks that may depend on this being set to some sort of value. Example: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_solo" do |chef| chef.node_name = "foo" end end ``` vagrant Chef Apply Provisioner Chef Apply Provisioner ======================= **Provisioner name: `chef_apply`** The Vagrant Chef Apply provisioner allows you to provision the guest using [Chef](https://www.getchef.com/), specifically with [Chef Apply](https://docs.getchef.com/ctl_chef_apply.html). Chef Apply is ideal for people who are already experienced with Chef and the Chef ecosystem. Specifically, this documentation page does not cover how use Chef or how to write Chef recipes. > **Warning:** If you are not familiar with Chef and Vagrant already, we recommend starting with the [shell provisioner](shell). > > Options -------- This section lists the complete set of available options for the Chef Apply provisioner. More detailed examples of how to use the provisioner are available below this section. * [`recipe`](#recipe) (string) - The raw recipe contents to execute using Chef Apply on the guest. 
* [`log_level`](#log_level) (string) - The log level to use while executing `chef-apply`. The default value is "info". * [`upload_path`](#upload_path) (string) - **Advanced!** The location on the guest where the generated recipe file should be stored. For most use cases, it is unlikely you will need to customize this value. The default value is `/tmp/vagrant-chef-apply-#` where `#` is a unique counter generated by Vagrant to prevent collisions. In addition to all the options listed above, the Chef Apply provisioner supports the [common options for all Chef provisioners](chef_common). Specifying a Recipe -------------------- The easiest way to get started with the Chef Apply provisioner is to just specify an inline [Chef recipe](https://docs.chef.io/recipes.html). For example: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_apply" do |chef| chef.recipe = "package[apache2]" end end ``` This causes Vagrant to run Chef Apply with the given recipe contents. If you are familiar with Chef, you know this will install the apache2 package from the system package provider. Since single-line Chef recipes are rare, you can also specify the recipe using a "heredoc": ``` Vagrant.configure("2") do |config| config.vm.provision "chef_apply" do |chef| chef.recipe = <<-RECIPE package "apache2" template "/etc/apache2/my.config" do # ... end RECIPE end end ``` Finally, if you would prefer to store the recipe as plain text, you can set the recipe to the contents of a file: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_apply" do |chef| chef.recipe = File.read("/path/to/my/recipe.rb") end end ``` Roles ------ The Vagrant Chef Apply provisioner does not support roles. Please use a different Vagrant Chef provisioner if you need support for roles. Data Bags ---------- The Vagrant Chef Apply provisioner does not support data\_bags. Please use a different Vagrant Chef provisioner if you need support for data\_bags. vagrant Docker Provisioner Docker Provisioner =================== **Provisioner name: `"docker"`** The Vagrant Docker provisioner can automatically install [Docker](https://www.docker.io), pull Docker containers, and configure certain containers to run on boot. The Docker provisioner is ideal for organizations that are using Docker as a means to distribute things like their application or services. Or, if you are just getting started with Docker, the Docker provisioner provides the easiest possible way to begin using Docker since the provisioner automates installing Docker for you. As with all provisioners, the Docker provisioner can be used along with all the other provisioners Vagrant has in order to set up your working environment the best way possible. For example, perhaps you use Puppet to install services like databases or web servers but use Docker to house your application runtime. You can use the Puppet provisioner along with the Docker provisioner. > **Note:** This documentation is for the Docker *provisioner*. If you are looking for the Docker *provider*, visit the [Docker provider documentation](../docker/index). > > Options -------- The Docker provisioner takes various options. None are required. If no options are provided, the Docker provisioner will only install Docker for you (if it is not already installed). * [`images`](#images) (array) - A list of images to pull using `docker pull`. You can also use the `pull_images` function. See the example below this section for more information.
In addition to the options that can be set, various functions are available and can be called to configure other aspects of the Docker provisioner. Most of these functions have examples in more detailed sections below. * [`build_image`](#build_image) - Build an image from a Dockerfile. * [`pull_images`](#pull_images) - Pull the given images. This does not start these images. * [`post_install_provisioner`](#post_install_provisioner) - A [provisioner block](../provisioning) that runs post docker installation. * [`run`](#run) - Run a container and configure it to start on boot. This can only be specified once. Building Images ---------------- The provisioner can automatically build images. Images are built prior to any configured containers to run, so you can build an image before running it. Building an image is easy: ``` Vagrant.configure("2") do |config| config.vm.provision "docker" do |d| d.build_image "/vagrant/app" end end ``` The argument to build an image is the path to give to `docker build`. This must be a path that exists within the guest machine. If you need to get data to the guest machine, use a synced folder. The `build_image` function accepts options as a second parameter. Here are the available options: * [`args`](#args) (string) - Additional arguments to pass to `docker build`. Use this to pass in things like `-t "foo"` to tag the image. Pulling Images --------------- The docker provisioner can automatically pull images from the Docker registry for you. There are two ways to specify images to pull. The first is as an array using `images`: ``` Vagrant.configure("2") do |config| config.vm.provision "docker", images: ["ubuntu"] end ``` This will cause Vagrant to pull the "ubuntu" image from the registry for you automatically. The second way to pull images is to use the `pull_images` function. Each call to `pull_images` will *append* the images to be pulled. The `images` variable, on the other hand, can only be used once. Additionally, the `pull_images` function cannot be used with the simple configuration method for provisioners (specifying it all in one line). ``` Vagrant.configure("2") do |config| config.vm.provision "docker" do |d| d.pull_images "ubuntu" d.pull_images "vagrant" end end ``` Running Containers ------------------- In addition to pulling images, the Docker provisioner can run and start containers for you. This lets you automatically start services as part of `vagrant up`. Running containers can only be configured using the Ruby block syntax with the `do...end` blocks. An example of running a container is shown below: ``` Vagrant.configure("2") do |config| config.vm.provision "docker" do |d| d.run "rabbitmq" end end ``` This will `docker run` a container with the "rabbitmq" image. Note that Vagrant uses the first parameter (the image name by default) to override any settings used in a previous `run` definition. Therefore, if you need to run multiple containers from the same image then you must specify the `image` option (documented below) with a unique name. In addition to the name, the `run` method accepts a set of options, all optional: * [`image`](#image) (string) - The image to run. This defaults to the first argument but can also be given here as an option. * [`cmd`](#cmd) (string) - The command to start within the container. If not specified, then the container's default command will be used, such as the "CMD" command [specified in the `Dockerfile`](https:/docs.docker.io/en/latest/use/builder/#cmd). 
* [`args`](#args-1) (string) - Extra arguments for [`docker run`](https://docs.docker.io/en/latest/commandline/cli/#run) on the command line. These are raw arguments that are passed directly to Docker. * [`auto_assign_name`](#auto_assign_name) (boolean) - If true, the `--name` of the container will be set to the first argument of the run. By default this is true. If the name set contains a "/" (because of the image name), it will be replaced with "-". Therefore, if you do `d.run "foo/bar"`, then the name of the container will be "foo-bar". * [`daemonize`](#daemonize) (boolean) - If true, the "-d" flag is given to `docker run` to daemonize the containers. By default this is true. * [`restart`](#restart) (string) - The restart policy for the container. Defaults to "always". For example, here is how you would configure Docker to run a container with the Vagrant shared directory mounted inside of it: ``` Vagrant.configure("2") do |config| config.vm.provision "docker" do |d| d.run "ubuntu", cmd: "bash -l", args: "-v '/vagrant:/var/www'" end end ``` In case you need to run multiple containers based off the same image, you can do so by providing different names and specifying the `image` parameter: ``` Vagrant.configure("2") do |config| config.vm.provision "docker" do |d| d.run "db-1", image: "user/mysql" d.run "db-2", image: "user/mysql" end end ``` Other ------ This section documents some other things related to the Docker provisioner that are generally useful to know if you are using this provisioner. ### Customize `/etc/default/docker` To customize this file, use the `post_install_provisioner` shell provisioner. ``` Vagrant.configure("2") do |config| config.vm.provision "docker" do |d| d.post_install_provision "shell", inline: "echo export http_proxy='http://127.0.0.1:3128/' >> /etc/default/docker" d.run "ubuntu", cmd: "bash -l", args: "-v '/vagrant:/var/www'" end end ``` vagrant Chef Client Provisioner Chef Client Provisioner ======================== **Provisioner name: `chef_client`** The Vagrant Chef Client provisioner allows you to provision the guest using [Chef](https://www.chef.io/chef/), specifically by connecting to an existing Chef Server and registering the Vagrant machine as a node within your infrastructure. If you are just learning Chef for the first time, you probably want to start with the [Chef Solo](chef_solo) provisioner. > **Warning:** If you are not familiar with Chef and Vagrant already, I recommend starting with the [shell provisioner](shell). > > Authenticating --------------- The minimum required to provision using Chef Client is to provide a URL to the Chef Server as well as the path to the validation key so that the node can register with the Chef Server: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_client" do |chef| chef.chef_server_url = "http://mychefserver.com" chef.validation_key_path = "validation.pem" end end ``` The node will register with the Chef Server specified, download the proper run list for that node, and provision. Specifying a Run List ---------------------- Normally, the Chef Server is responsible for specifying the run list for the node. However, you can override what the Chef Server sends down by manually specifying a run list: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_client" do |chef| # Add a recipe chef.add_recipe "apache" # Or maybe a role chef.add_role "web" end end ``` Remember, this will *override* the run list specified on the Chef server itself.
Environments ------------- You can specify the [environment](https://docs.chef.io/environments.html) for the node to come up in using the `environment` configuration option: ``` Vagrant.configure("2") do |config| config.vm.provision "chef_client" do |chef| # ... chef.environment = "development" end end ``` Other Configuration Options ---------------------------- There are a few more configuration options available. These generally do not need to be modified but are available if your Chef Server requires customization of these variables. * [`client_key_path`](#client_key_path) * [`node_name`](#node_name) * [`validation_client_name`](#validation_client_name) In addition to all the options listed above, the Chef Client provisioner supports the [common options for all Chef provisioners](chef_common). Cleanup -------- When you provision your Vagrant virtual machine with Chef Server, it creates a new Chef "node" entry and Chef "client" entry on the Chef Server, using the hostname of the machine. After you tear down your guest machine, Vagrant can be configured to delete these entries from the Chef Server automatically with the following settings: ``` chef.delete_node = true chef.delete_client = true ``` If you do not specify these settings or set them to `false`, you must explicitly delete these entries from the Chef Server before you provision a new machine with Chef Server. For example, using Chef's built-in `knife` tool: ``` $ knife node delete precise64 $ knife client delete precise64 ``` If you fail to do so, you will get the following error when Vagrant tries to provision the machine with Chef Client: ``` HTTP Request Returned 409 Conflict: Client already exists. ``` vagrant File Provisioner File Provisioner ================= **Provisioner name: `"file"`** The Vagrant file provisioner allows you to upload a file or directory from the host machine to the guest machine. File provisioning is a simple way to, for example, replicate your local ~/.gitconfig to the vagrant user's home directory on the guest machine so you will not have to run `git config --global` every time you provision a new VM. ``` Vagrant.configure("2") do |config| # ... other configuration config.vm.provision "file", source: "~/.gitconfig", destination: ".gitconfig" end ``` If you want to upload a folder to your guest system, you can do so with a file provisioner as seen below. When copied, the host folder `folder` will be placed on the guest machine as `newfolder`. Note that if you'd like the same folder name on your guest machine, make sure that the destination path has the same name as the folder on your host. ``` Vagrant.configure("2") do |config| # ... other configuration config.vm.provision "file", source: "~/path/to/host/folder", destination: "$HOME/remote/newfolder" end ``` Prior to copying `~/path/to/host/folder` to the guest machine: ``` folder ├── script.sh ├── otherfolder │   └── hello.sh ├── goodbye.sh ├── hello.sh └── woot.sh 1 directory, 5 files ``` After copying `~/path/to/host/folder` into `$HOME/remote/newfolder` on the guest machine: ``` newfolder ├── script.sh ├── otherfolder │   └── hello.sh ├── goodbye.sh ├── hello.sh └── woot.sh 1 directory, 5 files ``` Note that, unlike with synced folders, files or directories that are uploaded will not be kept in sync. Continuing with the example above, if you make further changes to your local ~/.gitconfig, they will not be immediately reflected in the copy you uploaded to the guest machine. The file uploads by the file provisioner are done as the *SSH or PowerShell user*.
This is important since these users generally do not have elevated privileges on their own. If you want to upload files to locations that require elevated privileges, we recommend uploading them to temporary locations and then using the [shell provisioner](shell) to move them into place (a short sketch of this pattern is shown at the end of this page). Options -------- The file provisioner takes only two options, both of which are required: * [`source`](#source) (string) - The local path of the file or directory to be uploaded. * [`destination`](#destination) (string) - The remote path on the guest machine where the source will be uploaded to. The file/folder is uploaded as the SSH user over SCP, so this location must be writable by that user. The SSH user can be determined by running `vagrant ssh-config`, and defaults to "vagrant". Caveats -------- While the file provisioner does support trailing slashes or "globbing", this can lead to some confusing results due to the underlying tool used to copy files and folders between the host and guests. For example, if you define a source and destination with a trailing slash as shown below: ``` config.vm.provision "file", source: "~/pathfolder", destination: "/remote/newlocation/" ``` You are telling Vagrant to upload `~/pathfolder` under the remote directory `/remote/newlocation`, which will look like: ``` newlocation ├── pathfolder │   └── file.sh 1 directory, 1 file ``` This behavior can also be achieved by defining your file provisioner as shown below: ``` config.vm.provision "file", source: "~/pathfolder", destination: "/remote/newlocation/pathfolder" ``` Another example is using globbing on the host machine to grab all files within a folder, but not the top level folder itself: ``` config.vm.provision "file", source: "~/otherfolder/.", destination: "/remote/otherlocation" ``` This tells the file provisioner to copy all files under `~/otherfolder` into the new location `/remote/otherlocation`. The same result can be achieved by simply having your destination folder name differ from the source folder: ``` config.vm.provision "file", source: "~/otherfolder", destination: "/remote/otherlocation" ```
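As noted above, uploads run as the unprivileged SSH (or PowerShell) user. The sketch below shows the recommended workaround for privileged destinations: upload to a temporary location first, then move the file into place with the shell provisioner. The `myapp.conf` file and target path are illustrative assumptions, not part of the official examples.

```
Vagrant.configure("2") do |config|
  # Upload as the unprivileged SSH user to a world-writable location first.
  config.vm.provision "file", source: "./myapp.conf", destination: "/tmp/myapp.conf"

  # Then move it into a root-owned location using the shell provisioner,
  # which runs with elevated privileges (privileged: true is the default).
  config.vm.provision "shell",
    inline: "mv /tmp/myapp.conf /etc/myapp.conf",
    privileged: true
end
```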
vagrant Shared Ansible Options Shared Ansible Options ======================= The following options are available to both Vagrant Ansible provisioners: * [`ansible`](ansible) * [`ansible_local`](ansible_local) These options get passed to the `ansible-playbook` command that ships with Ansible, either via command line arguments or environment variables, depending on Ansible own capabilities. Some of these options are for advanced usage only and should not be used unless you understand their purpose. * [`become`](#become) (boolean) - Perform all the Ansible playbook tasks [as another user](http://docs.ansible.com/ansible/become.html), different from the user used to log into the guest system. The default value is `false`. * [`become_user`](#become_user) (string) - Set the default username to be used by the Ansible `become` [privilege escalation](http://docs.ansible.com/ansible/become.html) mechanism. By default this option is not set, and the Ansible default value (`root`) will be used. * [`compatibility_mode`](#compatibility_mode) (string) - Set the **minimal** version of Ansible to be supported. Vagrant will only use parameters that are compatible with the given version. Possible values: + [`"auto"`](#quot-auto-quot-) *(Vagrant will automatically select the optimal compatibility mode by checking the Ansible version currently available)* + [`"1.8"`](#quot-1-8-quot-) *(Ansible versions prior to 1.8 should mostly work well, but some options might not be supported)* + [`"2.0"`](#quot-2-0-quot-) *(The generated Ansible inventory will be incompatible with Ansible 1.x)*By default this option is set to `"auto"`. If Vagrant is not able to detect any supported Ansible version, it will fall back on the compatibility mode `"1.8"` with a warning. Vagrant will error if the specified compatibility mode is incompatible with the current Ansible version. > **Attention:** Vagrant doesn't perform any validation between the `compatibility_mode` value and the value of the [`version`](#version) option. > > > **Compatibility Note:** This option was introduced in Vagrant 2.0. The behavior of previous Vagrant versions can be simulated by setting the `compatibility_mode` to `"1.8"`. > > * [`config_file`](#config_file) (string) - The path to an [Ansible Configuration file](https://docs.ansible.com/intro_configuration.html). By default, this option is not set, and Ansible will [search for a possible configuration file in some default locations](ansible_intro#ANSIBLE_CONFIG). * [`extra_vars`](#extra_vars) (string or hash) - Pass additional variables (with highest priority) to the playbook. This parameter can be a path to a JSON or YAML file, or a hash. Example: ``` ansible.extra_vars = { ntp_server: "pool.ntp.org", nginx: { port: 8008, workers: 4 } } ``` These variables take the highest precedence over any other variables. * [`galaxy_command`](#galaxy_command) (template string) - The command pattern used to install Galaxy roles when `galaxy_role_file` is set. 
The following (optional) placeholders can be used in this command pattern: + [`%{role_file}`](#role_file-) is replaced by the absolute path to the `galaxy_role_file` option + [`%{roles_path}`](#roles_path-) is - replaced by the absolute path to the `galaxy_roles_path` option when such option is defined, or - replaced by the absolute path to a `roles` subdirectory sitting in the `playbook` parent directory.By default, this option is set to `ansible-galaxy install --role-file=%{role_file} --roles-path=%{roles_path} --force` * [`galaxy_role_file`](#galaxy_role_file) (string) - The path to the Ansible Galaxy role file. By default, this option is set to `nil` and Galaxy support is then disabled. Note: if an absolute path is given, the `ansible_local` provisioner will assume that it corresponds to the exact location on the guest system. ``` ansible.galaxy_role_file = "requirements.yml" ``` * [`galaxy_roles_path`](#galaxy_roles_path) (string) - The path to the directory where Ansible Galaxy roles must be installed By default, this option is set to `nil`, which means that the Galaxy roles will be installed in a `roles` subdirectory located in the parent directory of the `playbook` file. * [`groups`](#groups) (hash) - Set of inventory groups to be included in the [auto-generated inventory file](ansible_intro). Example: ``` ansible.groups = { "web" => ["vm1", "vm2"], "db" => ["vm3"] } ``` Example with [group variables](https://docs.ansible.com/ansible/intro_inventory.html#group-variables): ``` ansible.groups = { "atlanta" => ["host1", "host2"], "atlanta:vars" => {"ntp_server" => "ntp.atlanta.example.com", "proxy" => "proxy.atlanta.example.com"} } ``` Notes: + Alphanumeric patterns are not supported (e.g. `db-[a:f]`, `vm[01:10]`). + This option has no effect when the `inventory_path` option is defined. * [`host_vars`](#host_vars) (hash) - Set of inventory host variables to be included in the [auto-generated inventory file](https://docs.ansible.com/ansible/intro_inventory.html#host-variables). Example: ``` ansible.host_vars = { "host1" => {"http_port" => 80, "maxRequestsPerChild" => 808}, "comments" => "text with spaces", "host2" => {"http_port" => 303, "maxRequestsPerChild" => 909} } ``` Note: This option has no effect when the `inventory_path` option is defined. * [`inventory_path`](#inventory_path) (string) - The path to an Ansible inventory resource (e.g. a [static inventory file](https://docs.ansible.com/intro_inventory.html), a [dynamic inventory script](https://docs.ansible.com/intro_dynamic_inventory.html) or even [multiple inventories stored in the same directory](https://docs.ansible.com/intro_dynamic_inventory.html#using-multiple-inventory-sources)). By default, this option is disabled and Vagrant generates an inventory based on the `Vagrantfile` information. * [`limit`](#limit) (string or array of strings) - Set of machines or groups from the inventory file to further control which hosts [are affected](https://docs.ansible.com/glossary.html#limit-groups). The default value is set to the machine name (taken from `Vagrantfile`) to ensure that `vagrant provision` command only affect the expected machine. Setting `limit = "all"` can be used to make Ansible connect to all machines from the inventory file. * [`playbook_command`](#playbook_command) (string) - The command used to run playbooks. The default value is `ansible-playbook` * [`raw_arguments`](#raw_arguments) (array of strings) - a list of additional `ansible-playbook` arguments. 
It is an *unsafe wildcard* that can be used to apply Ansible options that are not (yet) supported by this Vagrant provisioner. As of Vagrant 1.7, `raw_arguments` has the highest priority and its values can potentially override or break other Vagrant settings. Examples: + [`['--check', '-M', '/my/modules']`](#39-check-39-39-m-39-39-my-modules-39-) + [`["--connection=paramiko", "--forks=10"]`](#quot-connection-paramiko-quot-quot-forks-10-quot-) > **Attention:** The `ansible` provisioner does not support whitespace characters in `raw_arguments` elements. Therefore **don't write** something like `["-c paramiko"]`, which will result with an invalid `" paramiko"` parameter value. > > * [`skip_tags`](#skip_tags) (string or array of strings) - Only plays, roles and tasks that [*do not match* these values will be executed](https://docs.ansible.com/playbooks_tags.html). * [`start_at_task`](#start_at_task) (string) - The task name where the [playbook execution will start](https://docs.ansible.com/playbooks_startnstep.html#start-at-task). * [`sudo`](#sudo) (boolean) - Backwards compatible alias for the [`become`](#become) option. > **Deprecation:** The `sudo` option is deprecated and will be removed in a future release. Please use the [**`become`**](#become) option instead. > > * [`sudo_user`](#sudo_user) (string) - Backwards compatible alias for the [`become_user`](#become_user) option. > **Deprecation:** The `sudo_user` option is deprecated and will be removed in a future release. Please use the [**`become_user`**](#become_user) option instead. > > * [`tags`](#tags) (string or array of strings) - Only plays, roles and tasks [tagged with these values will be executed](https://docs.ansible.com/playbooks_tags.html) . * [`vault_password_file`](#vault_password_file) (string) - The path of a file containing the password used by [Ansible Vault](https://docs.ansible.com/playbooks_vault.html#vault). * [`verbose`](#verbose) (boolean or string) - Set Ansible's verbosity to obtain detailed logging Default value is `false` (minimal verbosity). Examples: `true` (equivalent to `v`), `-vvv` (equivalent to `vvv`), `vvvv`. Note that when the `verbose` option is enabled, the `ansible-playbook` command used by Vagrant will be displayed. * [`version`](#version) (string) - The expected Ansible version. This option is disabled by default. When an Ansible version is defined (e.g. `"2.1.6.0"`), the Ansible provisioner will be executed only if Ansible is installed at the requested version. When this option is set to `"latest"`, no version check is applied. > **Tip:** With the `ansible_local` provisioner, it is currently possible to use this option to specify which version of Ansible must be automatically installed, but **only** in combination with the [**`install_mode`**](ansible_local#install_mode) set to **`:pip`**. > > vagrant Ansible and Vagrant Ansible and Vagrant ==================== The information below is applicable to both Vagrant Ansible provisioners: * [`ansible`](ansible), where Ansible is executed on the **Vagrant host** * [`ansible_local`](ansible_local), where Ansible is executed on the **Vagrant guest** The list of common options for these two provisioners is documented in a [separate documentation page](ansible_common). This documentation page will not go into how to use Ansible or how to write Ansible playbooks, since Ansible is a complete deployment and configuration management system that is beyond the scope of Vagrant documentation. 
To learn more about Ansible, please consult the [Ansible Documentation Site](https://docs.ansible.com/). The Playbook File ------------------ The first component of a successful Ansible provisioner setup is the Ansible playbook which contains the steps that should be run on the guest. Ansible's [playbook documentation](https://docs.ansible.com/playbooks.html) goes into great detail on how to author playbooks, and there are a number of [best practices](https://docs.ansible.com/playbooks_best_practices.html) that can be applied to use Ansible's powerful features effectively. A playbook that installs and starts (or restarts) the NTP daemon via YUM looks like: ``` --- - hosts: all tasks: - name: ensure ntpd is at the latest version yum: pkg=ntp state=latest notify: - restart ntpd handlers: - name: restart ntpd service: name=ntpd state=restarted ``` You can of course target other operating systems that do not have YUM by changing the playbook tasks. Ansible ships with a number of [modules](https://docs.ansible.com/modules.html) that make running otherwise tedious tasks dead simple. ### Running Ansible The `playbook` option is strictly required by both Ansible provisioners ([`ansible`](ansible) and [`ansible_local`](ansible_local)), as illustrated in this basic Vagrantfile` configuration: ``` Vagrant.configure("2") do |config| # Use :ansible or :ansible_local to # select the provisioner of your choice config.vm.provision :ansible do |ansible| ansible.playbook = "playbook.yml" end end ``` Since an Ansible playbook can include many files, you may also collect the related files in a [directory structure](https://docs.ansible.com/playbooks_best_practices.html#directory-layout) like this: ``` . |-- Vagrantfile |-- provisioning | |-- group_vars | |-- all | |-- roles | |-- bar | |-- foo | |-- playbook.yml ``` In such an arrangement, the `ansible.playbook` path should be adjusted accordingly: ``` Vagrant.configure("2") do |config| config.vm.provision "ansible" do |ansible| ansible.playbook = "provisioning/playbook.yml" end end ``` The Inventory File ------------------- When using Ansible, it needs to know on which machines a given playbook should run. It does this by way of an [inventory](https://docs.ansible.com/intro_inventory.html) file which lists those machines. In the context of Vagrant, there are two ways to approach working with inventory files. ### Auto-Generated Inventory The first and simplest option is to not provide one to Vagrant at all. Vagrant will generate an inventory file encompassing all of the virtual machines it manages, and use it for provisioning machines. #### Example with the [`ansible`](ansible) provisioner ``` # Generated by Vagrant default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/.../.vagrant/machines/default/virtualbox/private_key' ``` Note that the generated inventory file is stored as part of your local Vagrant environment in `.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory`. #### Example with the [`ansible_local`](ansible_local) provisioner ``` # Generated by Vagrant default ansible_connection=local ``` Note that the generated inventory file is uploaded to the guest VM in a subdirectory of [`tmp_path`](ansible_local), e.g. `/tmp/vagrant-ansible/inventory/vagrant_ansible_local_inventory`. 
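Because the host-side file shown above is a plain Ansible inventory, it can be handy for debugging to run `ansible-playbook` against it manually from the host, outside of Vagrant. A hedged example, assuming the `ansible` provisioner, the default machine name, and a `playbook.yml` in the project directory:

```
$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --limit default playbook.yml
```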
#### Host Variables As of Vagrant 1.8.0, the [`host_vars`](ansible_common#host_vars) option can be used to set [variables for individual hosts](https://docs.ansible.com/ansible/intro_inventory.html#host-variables) in the generated inventory file (see also the notes on group variables below). With this configuration example: ``` Vagrant.configure("2") do |config| config.vm.define "host1" config.vm.define "host2" config.vm.provision "ansible" do |ansible| ansible.playbook = "playbook.yml" ansible.host_vars = { "host1" => {"http_port" => 80, "maxRequestsPerChild" => 808}, "host2" => {"http_port" => 303, "maxRequestsPerChild" => 909} } end end ``` Vagrant would generate the following inventory file: ``` # Generated by Vagrant host1 ansible_ssh_host=... http_port=80 maxRequestsPerChild=808 host2 ansible_ssh_host=... http_port=303 maxRequestsPerChild=909 ``` #### Groups and Group Variables The [`groups`](ansible_common#groups) option can be used to pass a hash of group names and group members to be included in the generated inventory file. As of Vagrant 1.8.0, it is also possible to specify [group variables](https://docs.ansible.com/ansible/intro_inventory.html#group-variables), and group members as [host ranges (with numeric or alphabetic patterns)](https://docs.ansible.com/ansible/intro_inventory.html#hosts-and-groups). With this configuration example: ``` Vagrant.configure("2") do |config| config.vm.box = "ubuntu/trusty64" config.vm.define "machine1" config.vm.define "machine2" config.vm.provision "ansible" do |ansible| ansible.playbook = "playbook.yml" ansible.groups = { "group1" => ["machine1"], "group2" => ["machine2"], "group3" => ["machine[1:2]"], "group4" => ["other_node-[a:d]"], # silly group definition "all_groups:children" => ["group1", "group2"], "group1:vars" => {"variable1" => 9, "variable2" => "example"} } end end ``` Vagrant would generate the following inventory file: ``` # Generated by Vagrant machine1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/.../.vagrant/machines/machine1/virtualbox/private_key' machine2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/.../.vagrant/machines/machine2/virtualbox/private_key' [group1] machine1 [group2] machine2 [group3] machine[1:2] [group4] other_node-[a:d] [all_groups:children] group1 group2 [group1:vars] variable1=9 variable2=example ``` **Notes:** * Prior to Vagrant 1.7.3, the `ansible_ssh_private_key_file` variable was not set in generated inventory, but passed as command line argument to `ansible-playbook` command. * The generation of group variables blocks (e.g. `[group1:vars]`) is only possible since Vagrant 1.8.0. Note however that setting variables directly in the inventory is not the [preferred practice in Ansible](https://docs.ansible.com/intro_inventory.html#splitting-out-host-and-group-specific-data). If possible, group (or host) variables should be set in `YAML` files stored in the `group_vars/` or `host_vars/` directories in the playbook (or inventory) directory instead. * Unmanaged machines and undefined groups are not added to the inventory, to avoid useless Ansible errors (e.g. 
*unreachable host* or *undefined child group*) For example, `machine3` and `group3` in the example below would not be added to the generated inventory file: ``` ansible.groups = { "group1" => ["machine1"], "group2" => ["machine2", "machine3"], "all_groups:children" => ["group1", "group2", "group3"] } ``` * [Host range patterns (numeric and alphabetic ranges)](https://docs.ansible.com/ansible/intro_inventory.html#hosts-and-groups) will not be validated by Vagrant. As of Vagrant 1.8.0, host range patterns will be added as group members to the inventory anyway, this might lead to errors in Ansible (e.g *unreachable host*). ### Static Inventory The second option is for situations where you would like to have more control over the inventory management. With the [`inventory_path`](ansible_common#inventory_path) option, you can reference a specific inventory resource (e.g. a static inventory file, a [dynamic inventory script](https://docs.ansible.com/intro_dynamic_inventory.html) or even [multiple inventories stored in the same directory](https://docs.ansible.com/intro_dynamic_inventory.html#using-multiple-inventory-sources)). Vagrant will then use this inventory information instead of generating it. A very simple inventory file for use with Vagrant might look like: ``` default ansible_ssh_host=192.168.111.222 ``` Where the above IP address is one set in your Vagrantfile: ``` config.vm.network :private_network, ip: "192.168.111.222" ``` **Notes:** * The machine names in `Vagrantfile` and `ansible.inventory_path` files should correspond, unless you use `ansible.limit` option to reference the correct machines. * The SSH host addresses (and ports) must obviously be specified twice, in `Vagrantfile` and `ansible.inventory_path` files. * Sharing hostnames across Vagrant host and guests might be a good idea (e.g. with some Ansible configuration task, or with a plugin like [`vagrant-hostmanager`](https://github.com/smdahlen/vagrant-hostmanager)). ### The Ansible Configuration File Certain settings in Ansible are (only) adjustable via a [configuration file](https://docs.ansible.com/intro_configuration.html), and you might want to ship such a file in your Vagrant project. When shipping an Ansible configuration file it is good to know that: * as of Ansible 1.5, the lookup order is the following: + any path set as `ANSIBLE_CONFIG` environment variable + [`ansible.cfg`](#ansible-cfg) in the runtime working directory + [`.ansible.cfg`](#ansible-cfg-1) in the user home directory + [`/etc/ansible/ansible.cfg`](#etc-ansible-ansible-cfg) * Ansible commands don't look for a configuration file relative to the playbook file location (e.g. in the same directory) * an `ansible.cfg` file located in the same directory as your `Vagrantfile` will be used by default. * it is also possible to reference any other location with the [config\_file](ansible_common#config_file) provisioner option. In this case, Vagrant will set the `ANSIBLE_CONFIG` environment variable accordingly.
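For example, a minimal sketch of pointing the provisioner at a project-local configuration file via the `config_file` option (the `provisioning/ansible.cfg` path is an illustrative assumption):

```
Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook    = "provisioning/playbook.yml"
    # Illustrative path; Vagrant will set ANSIBLE_CONFIG to point at this file.
    ansible.config_file = "provisioning/ansible.cfg"
  end
end
```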
vagrant Puppet Agent Provisioner Puppet Agent Provisioner ========================= **Provisioner name: `puppet_server`** The Vagrant Puppet agent provisioner allows you to provision the guest using [Puppet](https://www.puppetlabs.com/puppet), specifically by calling `puppet agent`, connecting to a Puppet master, and retrieving the set of modules and manifests from there. > **Warning:** If you are not familiar with Puppet and Vagrant already, I recommend starting with the [shell provisioner](shell). However, if you are comfortable with Vagrant already, Vagrant is the best way to learn Puppet. > > Options -------- The `puppet_server` provisioner takes various options. None are strictly required. They are listed below: * [`binary_path`](#binary_path) (string) - Path on the guest to Puppet's `bin/` directory. * [`client_cert_path`](#client_cert_path) (string) - Path to the client certificate for the node on your disk. This defaults to nothing, in which case a client cert will not be uploaded. * [`client_private_key_path`](#client_private_key_path) (string) - Path to the client private key for the node on your disk. This defaults to nothing, in which case a client private key will not be uploaded. * [`facter`](#facter) (hash) - Additional Facter facts to make available to the Puppet run. * [`options`](#options-1) (string or array) - Additional command line options to pass to `puppet agent` when Puppet is run. * [`puppet_node`](#puppet_node) (string) - The name of the node. If this is not set, this will attempt to use a hostname if set via `config.vm.hostname`. Otherwise, the box name will be used. * [`puppet_server`](#puppet_server) (string) - Hostname of the Puppet server. By default "puppet" will be used. Specifying the Puppet Master ----------------------------- The quickest way to get started with the Puppet agent provisioner is to just specify the location of the Puppet master: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet_server" do |puppet| puppet.puppet_server = "puppet.example.com" end end ``` By default, Vagrant will look for the host named "puppet" on the local domain of the guest machine. Configuring the Node Name -------------------------- The node name that the agent registers as can be customized. Remember, this is important because Puppet uses the node name as part of the process to compile the catalog the node will run. The node name defaults to the hostname of the guest machine, but can be customized using the Vagrantfile: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet_server" do |puppet| puppet.puppet_node = "node.example.com" end end ``` Additional Options ------------------- Puppet supports a lot of command-line flags. Basically any setting can be overridden on the command line. To give you the most power and flexibility possible with Puppet, Vagrant allows you to specify custom command line flags to use: ``` Vagrant.configure("2") do |config| config.vm.provision "puppet_server" do |puppet| puppet.options = "--verbose --debug" end end ``` vagrant Ansible Local Provisioner Ansible Local Provisioner ========================== **Provisioner name: `ansible_local`** The Vagrant Ansible Local provisioner allows you to provision the guest using [Ansible](http://ansible.com) playbooks by executing **`ansible-playbook` directly on the guest machine**. > **Warning:** If you are not familiar with Ansible and Vagrant already, I recommend starting with the [shell provisioner](shell).
However, if you are comfortable with Vagrant already, Vagrant is a great way to learn Ansible. > > Setup Requirements ------------------- The main advantage of the Ansible Local provisioner in comparison to the [Ansible (remote) provisioner](ansible) is that it does not require any additional software on your Vagrant host. On the other hand, [Ansible must obviously be installed](https://docs.ansible.com/intro_installation.html#installing-the-control-machine) on your guest machine(s). **Note:** By default, Vagrant will *try* to automatically install Ansible if it is not yet present on the guest machine (see the `install` option below for more details). Usage ------ This page only documents the specific parts of the `ansible_local` provisioner. General Ansible concepts like Playbook or Inventory are shortly explained in the [introduction to Ansible and Vagrant](ansible_intro). The Ansible Local provisioner requires that all the Ansible Playbook files are available on the guest machine, at the location referred by the `provisioning_path` option. Usually these files are initially present on the host machine (as part of your Vagrant project), and it is quite easy to share them with a Vagrant [Synced Folder](../synced-folders/index). ### Simplest Configuration To run Ansible from your Vagrant guest, the basic `Vagrantfile` configuration looks like: ``` Vagrant.configure("2") do |config| # Run Ansible from the Vagrant VM config.vm.provision "ansible_local" do |ansible| ansible.playbook = "playbook.yml" end end ``` **Requirements:** * The `playbook.yml` file is stored in your Vagrant's project home directory. * The [default shared directory](../synced-folders/basic_usage) is enabled (`.` → `/vagrant`). Options -------- This section lists the *specific* options for the Ansible Local provisioner. In addition to the options listed below, this provisioner supports the [**common options** for both Ansible provisioners](ansible_common). * [`install`](#install) (boolean) - Try to automatically install Ansible on the guest system. This option is enabled by default. Vagrant will try to install (or upgrade) Ansible when one of these conditions are met: + Ansible is not installed (or cannot be found). + The [`version`](ansible_common#version) option is set to `"latest"`. + The current Ansible version does not correspond to the [`version`](ansible_common#version) option. > **Attention:** There is no guarantee that this automated installation will replace a custom Ansible setup, that might be already present on the Vagrant box. > > * [`install_mode`](#install_mode) (`:default`, `:pip`, or `:pip_args_only`) - Select the way to automatically install Ansible on the guest system. + [`:default`](#default): Ansible is installed from the operating system package manager. This mode doesn't support `version` selection. For many platforms (e.g Debian, FreeBSD, OpenSUSE) the official package repository is used, except for the following Linux distributions: - On Ubuntu-like systems, the latest Ansible release is installed from the `ppa:ansible/ansible` repository. The compatibility is maintained only for active long-term support (LTS) versions. - On RedHat-like systems, the latest Ansible release is installed from the [EPEL](http://fedoraproject.org/wiki/EPEL) repository. + [`:pip`](#pip): Ansible is installed from [PyPI](https://pypi.python.org/pypi) with [pip](https://pip.pypa.io) package installer. 
With this mode, Vagrant will systematically try to [install the latest pip version](https://pip.pypa.io/en/stable/installing/#installing-with-get-pip-py). With the `:pip` mode you can optionally install a specific Ansible release by setting the [`version`](ansible_common#version) option. Example: ``` config.vm.provision "ansible_local" do |ansible| ansible.playbook = "playbook.yml" ansible.install_mode = "pip" ansible.version = "2.2.1.0" end ``` With this configuration, Vagrant will install `pip` and then execute the command ``` sudo pip install --upgrade ansible==2.2.1.0 ``` + [`:pip_args_only`](#pip_args_only): This mode is very similar to the `:pip` mode, with the difference that in this case no pip arguments will be automatically set by Vagrant. Example: ``` config.vm.provision "ansible_local" do |ansible| ansible.playbook = "playbook.yml" ansible.install_mode = "pip_args_only" ansible.pip_args = "-r /vagrant/requirements.txt" end ``` With this configuration, Vagrant will install `pip` and then execute the command ``` sudo pip install -r /vagrant/requirements.txt ```The default value of `install_mode` is `:default`, and any invalid value for this option will silently fall back to the default value. * [`pip_args`](#pip_args) (string) - When Ansible is installed via pip, this option allows the definition of additional pip arguments to be passed along on the command line (for example, [`--index-url`](https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-i)). By default, this option is not set. Example: ``` config.vm.provision "ansible_local" do |ansible| ansible.playbook = "playbook.yml" ansible.install_mode = :pip ansible.pip_args = "--index-url https://pypi.internal" end ``` With this configuration, Vagrant will install `pip` and then execute the command ``` sudo pip install --index-url https://pypi.internal --upgrade ansible ``` * [`provisioning_path`](#provisioning_path) (string) - An absolute path on the guest machine where the Ansible files are stored. The `ansible-galaxy` and `ansible-playbook` commands are executed from this directory. This is the location to place an [ansible.cfg](http://docs.ansible.com/ansible/intro_configuration.html) file, in case you need it. The default value is `/vagrant`. * [`tmp_path`](#tmp_path) (string) - An absolute path on the guest machine where temporary files are stored by the Ansible Local provisioner. The default value is `/tmp/vagrant-ansible` Tips and Tricks ---------------- ### Install Galaxy Roles in a path owned by root > **Disclaimer:** This tip is not a recommendation to install galaxy roles out of the vagrant user space, especially if you rely on ssh agent forwarding to fetch the roles. Be careful that `ansible-galaxy` command is executed by default as vagrant user. Setting `galaxy_roles_path` to a folder like `/etc/ansible/roles` will fail, and `ansible-galaxy` will extract the role a second time in `/home/vagrant/.ansible/roles/`. Then if your playbook uses `become` to run as `root`, it will fail with a *"role was not found"* error. 
To work around that, you can use `ansible.galaxy_command` to prepend the command with `sudo`, as illustrated in the example below: ``` Vagrant.configure(2) do |config| config.vm.box = "centos/7" config.vm.provision "ansible_local" do |ansible| ansible.become = true ansible.playbook = "playbook.yml" ansible.galaxy_role_file = "requirements.yml" ansible.galaxy_roles_path = "/etc/ansible/roles" ansible.galaxy_command = "sudo ansible-galaxy install --role-file=%{role_file} --roles-path=%{roles_path} --force" end end ``` ### Ansible Parallel Execution from a Guest With the following configuration pattern, you can install and execute Ansible only on a single guest machine (the `"controller"`) to provision all your machines. ``` Vagrant.configure("2") do |config| config.vm.box = "ubuntu/trusty64" config.vm.define "node1" do |machine| machine.vm.network "private_network", ip: "172.17.177.21" end config.vm.define "node2" do |machine| machine.vm.network "private_network", ip: "172.17.177.22" end config.vm.define 'controller' do |machine| machine.vm.network "private_network", ip: "172.17.177.11" machine.vm.provision :ansible_local do |ansible| ansible.playbook = "example.yml" ansible.verbose = true ansible.install = true ansible.limit = "all" # or only "nodes" group, etc. ansible.inventory_path = "inventory" end end end ``` You need to create a static `inventory` file that corresponds to your `Vagrantfile` machine definitions: ``` controller ansible_connection=local node1 ansible_ssh_host=172.17.177.21 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/node1/virtualbox/private_key node2 ansible_ssh_host=172.17.177.22 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/node2/virtualbox/private_key [nodes] node[1:2] ``` And finally, you also have to create an [`ansible.cfg` file](https://docs.ansible.com/intro_configuration.html#openssh-specific-settings) to fully disable SSH host key checking. More SSH configurations can be added to the `ssh_args` parameter (e.g. agent forwarding, etc.) ``` [defaults] host_key_checking = no [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes ``` vagrant Basic Usage of Provisioners Basic Usage of Provisioners ============================ While Vagrant offers multiple options for how you are able to provision your machine, there is a standard usage pattern as well as some important points common to all provisioners that are important to know. Configuration -------------- First, every provisioner is configured within your [Vagrantfile](../vagrantfile/index) using the `config.vm.provision` method call. For example, the Vagrantfile below enables shell provisioning: ``` Vagrant.configure("2") do |config| # ... other configuration config.vm.provision "shell", inline: "echo hello" end ``` Every provisioner has a type, such as `"shell"`, used as the first parameter to the provisioning configuration. Following that is basic key/value for configuring that specific provisioner. Instead of basic key/value, you can also use a Ruby block for a syntax that is more like variable assignment. The following is effectively the same as the prior example: ``` Vagrant.configure("2") do |config| # ... other configuration config.vm.provision "shell" do |s| s.inline = "echo hello" end end ``` The benefit of the block-based syntax is that with more than a couple options it can greatly improve readability. 
Additionally, some provisioners, like the Chef provisioner, have special methods that can be called within that block to ease configuration that cannot be done with the key/value approach, or you can use this syntax to pass arguments to a shell script. The attributes that can be set in a single-line are the attributes that are set with the `=` style, such as `inline = "echo hello"` above. If the style is instead more of a function call, such as `add_recipe "foo"`, then this cannot be specified in a single line. Provisioners can also be named (since 1.7.0). These names are used cosmetically for output as well as overriding provisioner settings (covered further below). An example of naming provisioners is shown below: ``` Vagrant.configure("2") do |config| # ... other configuration config.vm.provision "bootstrap", type: "shell" do |s| s.inline = "echo hello" end end ``` Naming provisioners is simple. The first argument to `config.vm.provision` becomes the name, and then a `type` option is used to specify the provisioner type, such as `type: "shell"` above. Running Provisioners --------------------- Provisioners are run in three cases: the initial `vagrant up`, `vagrant provision`, and `vagrant reload --provision`. A `--no-provision` flag can be passed to `up` and `reload` if you do not want to run provisioners. Likewise, you can pass `--provision` to force provisioning. The `--provision-with` flag can be used if you only want to run a specific provisioner if you have multiple provisioners specified. For example, if you have a shell and Puppet provisioner and only want to run the shell one, you can do `vagrant provision --provision-with shell`. The arguments to `--provision-with` can be the provisioner type (such as "shell") or the provisioner name (such as "bootstrap" from above). Run Once, Always or Never -------------------------- By default, provisioners are only run once, during the first `vagrant up` since the last `vagrant destroy`, unless the `--provision` flag is set, as noted above. Optionally, you can configure provisioners to run on every `up` or `reload`. They will only be not run if the `--no-provision` flag is explicitly specified. To do this set the `run` option to "always", as shown below: ``` Vagrant.configure("2") do |config| config.vm.provision "shell", inline: "echo hello", run: "always" end ``` You can also set `run:` to `"never"` if you have an optional provisioner that you want to mention to the user in a "post up message" or that requires some other configuration before it is possible, then call this with `vagrant provision --provision-with bootstrap`. If you are using the block format, you must specify it outside of the block, as shown below: ``` Vagrant.configure("2") do |config| config.vm.provision "bootstrap", type: "shell", run: "never" do |s| s.inline = "echo hello" end end ``` Multiple Provisioners ---------------------- Multiple `config.vm.provision` methods can be used to define multiple provisioners. These provisioners will be run in the order they're defined. This is useful for a variety of reasons, but most commonly it is used so that a shell script can bootstrap some of the system so that another provisioner can take over later. If you define provisioners at multiple "scope" levels (such as globally in the configuration block, then in a [multi-machine](../multi-machine/index) definition, then maybe in a [provider-specific override](../providers/configuration)), then the outer scopes will always run *before* any inner scopes. 
For example, in the Vagrantfile below: ``` Vagrant.configure("2") do |config| config.vm.provision "shell", inline: "echo foo" config.vm.define "web" do |web| web.vm.provision "shell", inline: "echo bar" end config.vm.provision "shell", inline: "echo baz" end ``` The ordering of the provisioners will be to echo "foo", "baz", then "bar" (note the second one might not be what you expect!). Remember: ordering is *outside in*. With multiple provisioners, use the `--provision-with` setting along with names to get more fine grained control over what is run and when. Overriding Provisioner Settings -------------------------------- > **Warning: Advanced Topic!** Provisioner overriding is an advanced topic that really only becomes useful if you are already using multi-machine and/or provider overrides. If you are just getting started with Vagrant, you can safely skip this. > > When using features such as [multi-machine](../multi-machine/index) or [provider-specific overrides](../providers/configuration), you may want to define common provisioners in the global configuration scope of a Vagrantfile, but override certain aspects of them internally. Vagrant allows you to do this, but has some details to consider. To override settings, you must assign a name to your provisioner. ``` Vagrant.configure("2") do |config| config.vm.provision "foo", type: "shell", inline: "echo foo" config.vm.define "web" do |web| web.vm.provision "foo", type: "shell", inline: "echo bar" end end ``` In the above, only "bar" will be echoed, because the inline setting overloaded the outer provisioner. This overload is only effective within that scope: the "web" VM. If there were another VM defined, it would still echo "foo" unless it itself also overloaded the provisioner. **Be careful with ordering.** When overriding a provisioner in a sub-scope, the provisioner will run at *that point*. In the example below, the output would be "foo" then "bar": ``` Vagrant.configure("2") do |config| config.vm.provision "foo", type: "shell", inline: "echo ORIGINAL!" config.vm.define "web" do |web| web.vm.provision "shell", inline: "echo foo" web.vm.provision "foo", type: "shell", inline: "echo bar" end end ``` If you want to preserve the original ordering, you can specify the `preserve_order: true` flag: ``` Vagrant.configure("2") do |config| config.vm.provision "do-this", type: "shell", preserve_order: true, inline: "echo FIRST!" config.vm.provision "then-this", type: "shell", preserve_order: true, inline: "echo SECOND!" end ```
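As mentioned in the section on multiple provisioners above, the `--provision-with` flag accepts provisioner names as well as types. For example, to re-run only the provisioner named "foo" from the override examples above (an illustrative invocation, not part of the original page):

```
$ vagrant provision --provision-with foo
```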
vagrant Creating a Base Box Creating a Base Box ==================== As with [every Vagrant provider](../providers/basic_usage), the Vagrant Hyper-V provider has a custom box format that affects how base boxes are made. Prior to reading this, you should read the [general guide to creating base boxes](../boxes/base). Actually, it would probably be most useful to keep this open in a separate tab as you may be referencing it frequently while creating a base box. That page contains important information about common software to install on the box. Additionally, it is helpful to understand the [basics of the box file format](../boxes/format). > **Advanced topic!** This is a reasonably advanced topic that a beginning user of Vagrant does not need to understand. If you are just getting started with Vagrant, skip this and use an available box. If you are an experienced user of Vagrant and want to create your own custom boxes, this is for you. > > Additional Software -------------------- In addition to the software that should be installed based on the [general guide to creating base boxes](../boxes/base), Hyper-V base boxes require some additional software. ### Hyper-V Kernel Modules You will need to install Hyper-V kernel modules. While this improves performance, it also enables necessary features such as reporting its IP address so that Vagrant can access it. You can verify Hyper-V kernel modules are properly installed by running `lsmod` on Linux machines and looking for modules prefixed with `hv_`. Additionally, you will need to verify that the "Network" tab for your virtual machine in the Hyper-V manager is reporting an IP address. If it is not reporting an IP address, Vagrant will not be able to access it. For most newer Linux distributions, the Hyper-V modules will be available out of the box. Ubuntu 12.04 requires some special steps to make networking work. These are reproduced here in case similar steps are needed with other distributions. Without these commands, Ubuntu 12.04 will not report an IP address to Hyper-V: ``` $ sudo apt-get install linux-tools-3.11.0-15-generic $ sudo apt-get install hv-kvp-daemon-init $ sudo cp /usr/lib/linux-tools/3.11.0-15/hv_* /usr/sbin/ ``` Packaging the Box ------------------ To package a Hyper-V box, export the virtual machine from the Hyper-V Manager using the "Export" feature. This will create a directory with a structure similar to the following: ``` . |-- Snapshots |-- Virtual Hard drives |-- Virtual Machines ``` Delete the "Snapshots" folder. It is of no use to the Vagrant Hyper-V provider and can only add to the size of the box if there are snapshots in that folder. Then, create the "metadata.json" file necessary for the box, as documented in [basics of the box file format](../boxes/format). The proper provider value to use for the metadata is "hyperv". Finally, create an archive of those contents (but *not* the parent folder) using a tool such as `tar`: ``` $ tar cvzf ~/custom.box ./* ``` A common mistake is to also package the parent folder by accident. Vagrant will not work in this case. To verify you've packaged it properly, add the box to Vagrant and try to bring up the machine. Additional Help ---------------- There is also some less structured help available from the experience of other users. 
These are not official documentation but if you are running into trouble they may help you: * [Ubuntu 14.04.2 without secure boot](https://github.com/hashicorp/vagrant/issues/5419#issuecomment-86235427) vagrant Hyper-V Hyper-V ======== Vagrant comes with support out of the box for [Hyper-V](https://en.wikipedia.org/wiki/Hyper-V), a native hypervisor written by Microsoft. Hyper-V is available by default for almost all Windows 8.1 and later installs. The Hyper-V provider is compatible with Windows 8.1 and later only. Prior versions of Hyper-V do not include the necessary APIs for Vagrant to work. Hyper-V must be enabled prior to using the provider. Most Windows installations will not have Hyper-V enabled by default. Hyper-V is available by default for almost all Windows Enterprise, Professional, or Education 8.1 and later installs. To enable Hyper-V, go to "Programs and Features", click on "Turn Windows features on or off" and check the box next to "Hyper-V". Or install via PowerShell with: `Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All` See official documentation [here](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v). > **Warning:** Enabling Hyper-V will cause VirtualBox, VMware, and any other virtualization technology to no longer work. See [this blog post](http://www.hanselman.com/blog/SwitchEasilyBetweenVirtualBoxAndHyperVWithABCDEditBootEntryInWindows81.aspx) for an easy way to create a boot entry to boot Windows without Hyper-V enabled, if there will be times you will need other hypervisors. > > vagrant Configuration Configuration ============== The Vagrant Hyper-V provider has some provider-specific configuration options you may set. A complete reference is shown below: * [`auto_start_action`](#auto_start_action) (Nothing, StartIfRunning, Start) - Automatic start action for VM on host startup. Default: Nothing. * [`auto_stop_action`](#auto_stop_action) (ShutDown, TurnOff, Save) - Automatic stop action for VM on host shutdown. Default: ShutDown. * [`cpus`](#cpus) (integer) - Number of virtual CPUs allocated to VM at startup. * [`differencing_disk`](#differencing_disk) (boolean) - **Deprecated** Use differencing disk instead of cloning entire VHD (use `linked_clone` instead) Default: false. * [`enable_virtualization_extensions`](#enable_virtualization_extensions) (boolean) - Enable virtualization extensions for the virtual CPUs. Default: false * [`enable_checkpoints`](#enable_checkpoints) (boolean) Enable checkpoints of the VM. Default: false * [`enable_automatic_checkpoints`](#enable_automatic_checkpoints) (boolean) Enable automatic checkpoints of the VM. Default: false * [`ip_address_timeout`](#ip_address_timeout) (integer) - Number of seconds to wait for the VM to report an IP address. Default: 120. * [`linked_clone`](#linked_clone) (boolean) - Use differencing disk instead of cloning entire VHD. Default: false * [`mac`](#mac) (string) - MAC address for the guest network interface * [`maxmemory`](#maxmemory) (integer) - Maximum number of megabytes allowed to be allocated for the VM. When set Dynamic Memory Allocation will be enabled. * [`memory`](#memory) (integer) - Number of megabytes allocated to VM at startup. If `maxmemory` is set, this will be amount of memory allocated at startup. * [`vlan_id`](#vlan_id) (integer) - VLAN ID for the guest network interface. * [`vmname`](#vmname) (string) - Name of virtual machine as shown in Hyper-V manager. Default: Generated name. 
* [`vm_integration_services`](#vm_integration_services) (Hash) - Hash to set the state of integration services. (Note: Unknown key values will be passed directly.) + [`guest_service_interface`](#guest_service_interface) (boolean) + [`heartbeat`](#heartbeat) (boolean) + [`key_value_pair_exchange`](#key_value_pair_exchange) (boolean) + [`shutdown`](#shutdown) (boolean) + [`time_synchronization`](#time_synchronization) (boolean) + [`vss`](#vss) (boolean) VM Integration Services ------------------------ The `vm_integration_services` configuration option consists of a simple Hash. The key values are the names of VM integration services to enable or disable for the VM. Vagrant includes an internal mapping of known services which allows them to be provided in a "snake case" format. When a provided key is unknown, the key value is used "as-is" without any modifications. For example, if a new `CustomVMSRV` VM integration service was added and Vagrant is not aware of this new service name, it can be provided as the key value explicitly: ``` config.vm.provider "hyperv" do |h| h.vm_integration_services = { guest_service_interface: true, CustomVMSRV: true } end ``` This example would enable the `GuestServiceInterface` (which Vagrant is aware) and `CustomVMSRV` (which Vagrant is *not* aware) VM integration services. vagrant Limitations Limitations ============ The Vagrant Hyper-V provider works in almost every way like the VirtualBox or VMware provider would, but has some limitations that are inherent to Hyper-V itself. Limited Networking ------------------- Vagrant does not yet know how to create and configure new networks for Hyper-V. When launching a machine with Hyper-V, Vagrant will prompt you asking what virtual switch you want to connect the virtual machine to. A result of this is that networking configurations in the Vagrantfile are completely ignored with Hyper-V. Vagrant cannot enforce a static IP or automatically configure a NAT. However, the IP address of the machine will be reported as part of the `vagrant up`, and you can use that IP address as if it were a host only network. Snapshots ---------- Restoring snapshot VMs using `vagrant snapshot pop` or `vagrant snapshot restore` will sometimes raise errors when mounting SMB shared folders, however these mounts will still work inside the guest. vagrant Usage Usage ====== The Vagrant Hyper-V provider is used just like any other provider. Please read the general [basic usage](../providers/basic_usage) page for providers. The value to use for the `--provider` flag is `hyperv`. Hyper-V also requires that you execute Vagrant with administrative privileges. Creating and managing virtual machines with Hyper-V requires admin rights. Vagrant will show you an error if it does not have the proper permissions. Boxes for Hyper-V can be easily found on [HashiCorp's Vagrant Cloud](https://vagrantcloud.com/boxes/search). To get started, you might want to try the `hashicorp/precise64` box. vagrant Synced Folders Synced Folders =============== Synced folders enable Vagrant to sync a folder on the host machine to the guest machine, allowing you to continue working on your project's files on your host machine, but use the resources in the guest machine to compile or run your project. By default, Vagrant will share your project directory (the directory with the [Vagrantfile](../vagrantfile/index)) to `/vagrant`. Read the [basic usage](basic_usage) page to get started with synced folders. 
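For example, a minimal sketch of defining an additional synced folder in a Vagrantfile (the `./src` and `/srv/app` paths are illustrative assumptions):

```
Vagrant.configure("2") do |config|
  # Sync the host directory ./src (relative to this Vagrantfile)
  # into the guest at /srv/app, in addition to the default /vagrant share.
  config.vm.synced_folder "./src", "/srv/app"
end
```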
vagrant NFS NFS ==== In some cases the default shared folder implementations (such as VirtualBox shared folders) have high performance penalties. If you are seeing less than ideal performance with synced folders, [NFS](https://en.wikipedia.org/wiki/Network_File_System_%28protocol%29) can offer a solution. Vagrant has built-in support to orchestrate the configuration of the NFS server on the host and guest for you. > **Windows users:** NFS folders do not work on Windows hosts. Vagrant will ignore your request for NFS synced folders on Windows. > > Prerequisites -------------- Before using synced folders backed by NFS, the host machine must have `nfsd` installed, the NFS server daemon. This comes pre-installed on Mac OS X, and is typically a simple package install on Linux. Additionally, the guest machine must have NFS support installed. This is also usually a simple package installation away. If you are using the VirtualBox provider, you will also need to make sure you have a [private network set up](../networking/private_network). This is due to a limitation of VirtualBox's built-in networking. With VMware, you do not need this. Enabling NFS Synced Folders ---------------------------- To enable NFS, just add the `type: "nfs"` flag onto your synced folder: ``` Vagrant.configure("2") do |config| config.vm.synced_folder ".", "/vagrant", type: "nfs" end ``` If you add this to an existing Vagrantfile that has a running guest machine, be sure to `vagrant reload` to see your changes. NFS Synced Folder Options -------------------------- NFS synced folders have a set of options that can be specified that are unique to NFS. These are listed below. These options can be specified in the final part of the `config.vm.synced_folder` definition, along with the `type` option. * [`nfs_export`](#nfs_export) (boolean) - If this is false, then Vagrant will not modify your `/etc/exports` automatically and assumes you've done so already. * [`nfs_udp`](#nfs_udp) (boolean) - Whether or not to use UDP as the transport. UDP is faster but has some limitations (see the NFS documentation for more details). This defaults to true. * [`nfs_version`](#nfs_version) (string | integer) - The NFS protocol version to use when mounting the folder on the guest. This defaults to 3. NFS Global Options ------------------- There are also more global NFS options you can set with `config.nfs` in the Vagrantfile. These are documented below: * [`functional`](#functional) (bool) - Defaults to true. If false, then NFS will not be used as a synced folder type. If a synced folder specifically requests NFS, it will error. * [`map_uid`](#map_uid) and `map_gid` (int) - The UID/GID, respectively, to map all read/write requests too. This will not affect the owner/group within the guest machine itself, but any writes will behave as if they were written as this UID/GID on the host. This defaults to the current user running Vagrant. * [`verify_installed`](#verify_installed) (bool) - Defaults to true. If this is false, then Vagrant will skip checking if NFS is installed. Specifying NFS Arguments ------------------------- In addition to the options specified above, it is possible for Vagrant to specify alternate NFS arguments when mounting the NFS share by using the `mount_options` key. 
For example, to use the `actimeo=2` client mount option: ``` config.vm.synced_folder ".", "/vagrant", type: "nfs", mount_options: ['actimeo=2'] ``` This would result in the following `mount` command being executed on the guest: ``` mount -o 'actimeo=2' 172.28.128.1:'/path/to/vagrantfile' /vagrant ``` You can also tweak the arguments specified in the `/etc/exports` template when the mount is added, by using the OS-specific `linux__nfs_options` or `bsd__nfs_options` keys. Note that these options completely override the default arguments that are added by Vagrant automatically. For example, to make the NFS share asynchronous: ``` config.vm.synced_folder ".", "/vagrant", type: "nfs", linux__nfs_options: ['rw','no_subtree_check','all_squash','async'] ``` This would result in the following content in `/etc/exports` on the host (note the added `async` flag): ``` # VAGRANT-BEGIN: 21171 5b8f0135-9e73-4166-9bfd-ac43d5f14261 "/path/to/vagrantfile" 172.28.128.5(rw,no_subtree_check,all_squash,async,anonuid=21171,anongid=660,fsid=3382034405) # VAGRANT-END: 21171 5b8f0135-9e73-4166-9bfd-ac43d5f14261 ``` Root Privilege Requirement --------------------------- To configure NFS, Vagrant must modify system files on the host. Therefore, at some point during the `vagrant up` sequence, you may be prompted for administrative privileges (via the typical `sudo` program). These privileges are used to modify `/etc/exports` as well as to start and stop the NFS server daemon. If you do not want to type your password on every `vagrant up`, Vagrant uses thoughtfully crafted commands to make fine-grained sudoers modifications possible to avoid entering your password. Below, we have a couple example sudoers entries. Note that you may have to modify them *slightly* on certain hosts because the way Vagrant modifies `/etc/exports` changes a bit from OS to OS. If the commands below are located in non-standard paths, modify them as appropriate. For \*nix users, make sure to edit your `/etc/sudoers` file with `visudo`. It protects you against syntax errors which could leave you without the ability to gain elevated privileges. All of the snippets below require Vagrant version 1.7.3 or higher. > **Use the appropriate group for your user** Depending on how your machine is configured, you might need to use a different group than the ones listed in the examples below. 
> > For OS X, sudoers should have this entry: ``` Cmnd_Alias VAGRANT_EXPORTS_ADD = /usr/bin/tee -a /etc/exports Cmnd_Alias VAGRANT_NFSD = /sbin/nfsd restart Cmnd_Alias VAGRANT_EXPORTS_REMOVE = /usr/bin/sed -E -e /*/ d -ibak /etc/exports %admin ALL=(root) NOPASSWD: VAGRANT_EXPORTS_ADD, VAGRANT_NFSD, VAGRANT_EXPORTS_REMOVE ``` For Ubuntu Linux , sudoers should look like this: ``` Cmnd_Alias VAGRANT_EXPORTS_CHOWN = /bin/chown 0\:0 /tmp/* Cmnd_Alias VAGRANT_EXPORTS_MV = /bin/mv -f /tmp/* /etc/exports Cmnd_Alias VAGRANT_NFSD_CHECK = /etc/init.d/nfs-kernel-server status Cmnd_Alias VAGRANT_NFSD_START = /etc/init.d/nfs-kernel-server start Cmnd_Alias VAGRANT_NFSD_APPLY = /usr/sbin/exportfs -ar %sudo ALL=(root) NOPASSWD: VAGRANT_EXPORTS_CHOWN, VAGRANT_EXPORTS_MV, VAGRANT_NFSD_CHECK, VAGRANT_NFSD_START, VAGRANT_NFSD_APPLY ``` For Fedora Linux, sudoers might look like this (given your user belongs to the vagrant group): ``` Cmnd_Alias VAGRANT_EXPORTS_CHOWN = /bin/chown 0\:0 /tmp/* Cmnd_Alias VAGRANT_EXPORTS_MV = /bin/mv -f /tmp/* /etc/exports Cmnd_Alias VAGRANT_NFSD_CHECK = /usr/bin/systemctl status --no-pager nfs-server.service Cmnd_Alias VAGRANT_NFSD_START = /usr/bin/systemctl start nfs-server.service Cmnd_Alias VAGRANT_NFSD_APPLY = /usr/sbin/exportfs -ar %vagrant ALL=(root) NOPASSWD: VAGRANT_EXPORTS_CHOWN, VAGRANT_EXPORTS_MV, VAGRANT_NFSD_CHECK, VAGRANT_NFSD_START, VAGRANT_NFSD_APPLY ``` If you don't want to edit `/etc/sudoers` directly, you can create `/etc/sudoers.d/vagrant-syncedfolders` with the appropriate entries, assuming `/etc/sudoers.d` has been enabled. Other Notes ------------ **Encrypted folders:** If you have an encrypted disk, then NFS very often will refuse to export the filesystem. The error message given by NFS is often not clear. One error message seen is `<path> does not support NFS`. There is no workaround for this other than sharing a directory which is not encrypted. **Version 4:** UDP is generally not a valid transport protocol for NFSv4. Early implementations of NFS 4.0 still allowed UDP which allows the UDP transport protocol to be used in rare cases. RFC5661 explicitly states UDP alone should not be used for the transport protocol in NFS 4.1. Errors due to unsupported transport protocols for specific versions of NFS are not always clear. A common error message when attempting to use UDP with NFSv4: ``` mount.nfs: an incorrect mount option was specified ``` When using NFSv4, ensure the `nfs_udp` option is set to false. For example: ``` config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_version: 4, nfs_udp: false ``` For more information about transport protocols and NFS version 4 see: * NFSv4.0 - [RFC7530](https://tools.ietf.org/html/rfc7530#section-3.1) * NFSv4.1 - [RFC5661](https://tools.ietf.org/html/rfc5661#section-2.9.1) vagrant RSync RSync ====== **Synced folder type:** `rsync` Vagrant can use [rsync](https://en.wikipedia.org/wiki/Rsync) as a mechanism to sync a folder to the guest machine. This synced folder type is useful primarily in situations where other synced folder mechanisms are not available, such as when NFS or VirtualBox shared folders are not available in the guest machine. The rsync synced folder does a one-time one-way sync from the machine running to the machine being started by Vagrant. The [rsync](../cli/rsync) and [rsync-auto](../cli/rsync-auto) commands can be used to force a resync and to automatically resync when changes occur in the filesystem. 
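For example, both commands are run from the project directory (they are documented in more detail on the pages linked above):

```
# One-time forced resync of all rsync synced folders
$ vagrant rsync

# Watch the local folders and resync whenever changes are detected
$ vagrant rsync-auto
```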
Without running these commands, Vagrant only syncs the folders on `vagrant up` or `vagrant reload`. Prerequisites -------------- To use the rsync synced folder type, the machine running Vagrant must have `rsync` (or `rsync.exe`) on the path. This executable is expected to behave like the standard rsync tool. On Windows, rsync installed with Cygwin or MinGW will be detected by Vagrant and works well. The destination machine must also have rsync installed, but Vagrant can automatically install rsync into many operating systems. If Vagrant is unable to automatically install rsync for your operating system, it will tell you. The destination folder will be created as the user initiating the connection, this is `vagrant` by default. This user requires the appropriate permissions on the destination folder. Options -------- The rsync synced folder type accepts the following options: * [`rsync__args`](#rsync__args) (array of strings) - A list of arguments to supply to `rsync`. By default this is `["--verbose", "--archive", "--delete", "-z", "--copy-links"]`. * [`rsync__auto`](#rsync__auto) (boolean) - If false, then `rsync-auto` will not watch and automatically sync this folder. By default, this is true. **Note**: This option will not automatically invoke the `rsync-auto` subcommand. * [`rsync__chown`](#rsync__chown) (boolean) - If false, then the [`owner` and `group`](basic_usage) options for the synced folder are ignored and Vagrant will not execute a recursive `chown`. This defaults to true. This option exists because the `chown` causes issues for some development environments. Note that any `rsync__args` options for ownership **will be overridden** by `rsync__chown`. * [`rsync__exclude`](#rsync__exclude) (string or array of strings) - A list of files or directories to exclude from the sync. The values can be any acceptable rsync exclude pattern. By default, the ".vagrant/" directory is excluded. We recommend excluding revision control directories such as ".git/" as well. * [`rsync__rsync_path`](#rsync__rsync_path) (string) - The path on the remote host where rsync is and how it is executed. This is platform specific but defaults to "sudo rsync" for many guests. * [`rsync__verbose`](#rsync__verbose) (boolean) - If true, then the output from the rsync process will be echoed to the console. The output of rsync is subject to `rsync__args` of course. By default, this is false. Example -------- The following is an example of using RSync to sync a folder: ``` Vagrant.configure("2") do |config| config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/" end ``` Rsync to a restricted folder ----------------------------- If required to copy to a destination where `vagrant` user does not have permissions, use `"--rsync-path='sudo rsync'"` to run rsync with sudo on the guest ``` Vagrant.configure("2") do |config| config.vm.synced_folder "bin", "/usr/local/bin", type: "rsync", rsync__exclude: ".git/", rsync__args: ["--verbose", "--rsync-path='sudo rsync'", "--archive", "--delete", "-z"] end ```
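For everyday use it is usually enough to combine the other options rather than overriding `rsync__args`, which (as in the example above) replaces the default argument list rather than appending to it. A minimal sketch, with purely illustrative directory names, that excludes several directories and opts the folder out of `rsync-auto` watching:

```
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__exclude: [".git/", "log/", "tmp/"],
    rsync__auto: false
end
```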
vagrant VirtualBox VirtualBox =========== If you are using the Vagrant VirtualBox [provider](../providers/index), then VirtualBox shared folders are the default synced folder type. These synced folders use the VirtualBox shared folder system to sync file changes from the guest to the host and vice versa. Options -------- * [`SharedFoldersEnableSymlinksCreate`](#sharedfoldersenablesymlinkscreate) (boolean) - If false, will disable the ability to create symlinks with the given virtualbox shared folder. Defaults to true if the option is not present. Caveats -------- There is a [VirtualBox bug](https://github.com/hashicorp/vagrant/issues/351#issuecomment-1339640) related to `sendfile` which can result in corrupted or non-updating files. You should deactivate `sendfile` in any web servers you may be running. In Nginx: ``` sendfile off; ``` In Apache: ``` EnableSendfile Off ``` vagrant SMB SMB ==== **Synced folder type:** `smb` Vagrant can use [SMB](https://en.wikipedia.org/wiki/Server_Message_Block) as a mechanism to create a bi-directional synced folder between the host machine and the Vagrant machine. SMB is built-in to Windows machines and provides a higher performance alternative to some other mechanisms such as VirtualBox shared folders. > SMB is currently only supported when the host machine is Windows or macOS. The guest machine can be Windows, Linux, or macOS. > > Prerequisites -------------- ### Windows Host To use the SMB synced folder type on a Windows host, the machine must have PowerShell version 3 or later installed. In addition, when Vagrant attempts to create new SMB shares, or remove existing SMB shares, Administrator privileges will be required. Vagrant will request these privileges using UAC. ### macOS Host To use the SMB synced folder type on a macOS host, file sharing must be enabled for the local account. Enable SMB file sharing by following the instructions below: * Open "System Preferences" * Click "Sharing" * Check the "On" checkbox next to "File Sharing" * Click "Options" * Check "Share files and folders using SMB" * Check the "On" checkbox next to your username within "Windows File Sharing" * Click "Done" When Vagrant attempts to create new SMB shares, or remove existing SMB shares, root access will be required. Vagrant will request these privileges using `sudo` to run the `/usr/sbin/sharing` command. Adding the following to the system's `sudoers` configuration will allow Vagrant to manage SMB shares without requiring a password each time: ``` Cmnd_Alias VAGRANT_SMB_ADD = /usr/sbin/sharing -a * -S * -s * -g * -n * Cmnd_Alias VAGRANT_SMB_REMOVE = /usr/sbin/sharing -r * Cmnd_Alias VAGRANT_SMB_LIST = /usr/sbin/sharing -l Cmnd_Alias VAGRANT_SMB_PLOAD = /bin/launchctl load -w /System/Library/LaunchDaemons/com.apple.smb.preferences.plist Cmnd_Alias VAGRANT_SMB_DLOAD = /bin/launchctl load -w /System/Library/LaunchDaemons/com.apple.smbd.plist Cmnd_Alias VAGRANT_SMB_DSTART = /bin/launchctl start com.apple.smbd %admin ALL=(root) NOPASSWD: VAGRANT_SMB_ADD, VAGRANT_SMB_REMOVE, VAGRANT_SMB_LIST, VAGRANT_SMB_PLOAD, VAGRANT_SMB_DLOAD, VAGRANT_SMB_DSTART ``` ### Guests The destination machine must be able to mount SMB filesystems. On Linux the package to do this is usually called `smbfs` or `cifs`. Vagrant knows how to automatically install this for some operating systems. Options -------- The SMB synced folder type has a variety of options it accepts: * [`smb_host`](#smb_host) (string) - The host IP where the SMB mount is located. 
If this is not specified, Vagrant will attempt to determine this automatically. * [`smb_password`](#smb_password) (string) - The password used for authentication to mount the SMB mount. This is the password for the username specified by `smb_username`. If this is not specified, Vagrant will prompt you for it. It is highly recommended that you do not set this, since it would expose your password directly in your Vagrantfile. * [`smb_username`](#smb_username) (string) - The username used for authentication to mount the SMB mount. This is the username to access the mount, *not* the username of the account where the folder is being mounted to. This is usually your Windows username. If you sign into a domain, specify it as `user@domain`. If this option is not specified, Vagrant will prompt you for it. Example -------- The following is an example of using SMB to sync a folder: ``` Vagrant.configure("2") do |config| config.vm.synced_folder ".", "/vagrant", type: "smb" end ``` Preventing Idle Disconnects ---------------------------- On Windows, if a file is not accessed for some period of time, it may disconnect from the guest and prevent the guest from accessing the SMB-mounted share. To prevent this, the following command can be used in a superuser shell. Note that you should research if this is the right option for you. ``` net config server /autodisconnect:-1 ``` Common Issues -------------- ### "wrong fs type" Error If during mounting on Linux you are seeing an error message that includes the words "wrong fs type," this is because the SMB kernel extension needs to be updated in the OS. If updating the kernel extension is not an option, you can work around the issue by specifying the following options on your synced folder: ``` mount_options: ["username=USERNAME","password=PASSWORD"] ``` Replace "USERNAME" and "PASSWORD" with your SMB username and password. Vagrant 1.8 changed SMB mounting to use the more secure credential file mechanism. However, many operating systems ship with an outdated filesystem type for SMB out of the box which does not support this. The above workaround reverts Vagrant to the insecure mechanism used before, but allows the mount to work. vagrant Basic Usage Basic Usage ============ Configuration -------------- Synced folders are configured within your Vagrantfile using the `config.vm.synced_folder` method. Usage of the configuration directive is very simple: ``` Vagrant.configure("2") do |config| # other config here config.vm.synced_folder "src/", "/srv/website" end ``` The first parameter is a path to a directory on the host machine. If the path is relative, it is relative to the project root. The second parameter must be an absolute path of where to share the folder within the guest machine. This folder will be created (recursively, if it must) if it does not exist. By default, Vagrant mounts the synced folders with the owner/group set to the SSH user and any parent folders set to root. Options -------- You may also specify additional optional parameters when configuring synced folders. These options are listed below. More detailed examples of using some of these options are shown below this section; note that the owner/group example supplies two additional options separated by commas. In addition to these options, the specific synced folder type might allow more options. See the documentation for your specific synced folder type for more details. The built-in synced folder types are documented in other pages available in the navigation for these docs.
* [`create`](#create) (boolean) - If true, the host path will be created if it does not exist. Defaults to false. * [`disabled`](#disabled) (boolean) - If true, this synced folder will be disabled and will not be setup. This can be used to disable a previously defined synced folder or to conditionally disable a definition based on some external factor. * [`group`](#group) (string) - The group that will own the synced folder. By default this will be the SSH user. Some synced folder types do not support modifying the group. * [`mount_options`](#mount_options) (array) - A list of additional mount options to pass to the `mount` command. * [`owner`](#owner) (string) - The user who should be the owner of this synced folder. By default this will be the SSH user. Some synced folder types do not support modifying the owner. * [`type`](#type) (string) - The type of synced folder. If this is not specified, Vagrant will automatically choose the best synced folder option for your environment. Otherwise, you can specify a specific type such as "nfs". * [`id`](#id) (string) - The name for the mount point of this synced folder in the guest machine. This shows up when you run `mount` in the guest machine. Enabling --------- Synced folders are automatically setup during `vagrant up` and `vagrant reload`. Disabling ---------- Synced folders can be disabled by adding the `disabled` option to any definition: ``` Vagrant.configure("2") do |config| config.vm.synced_folder "src/", "/srv/website", disabled: true end ``` Disabling the default `/vagrant` share can be done as follows: ``` config.vm.synced_folder ".", "/vagrant", disabled: true ``` Modifying the Owner/Group -------------------------- Sometimes it is preferable to mount folders with a different owner/group than the default SSH user. Keep in mind that these options will only affect the synced folder itself. If you want to modify the owner/group of the synced folder's parent folders use a script. It is possible to set these options: ``` config.vm.synced_folder "src/", "/srv/website", owner: "root", group: "root" ``` *NOTE: Owner and group IDs defined within `mount_options` will have precedence over the `owner` and `group` options.* For example, given the following configuration: ``` config.vm.synced_folder ".", "/vagrant", owner: "vagrant", group: "vagrant", mount_options: ["uid=1234", "gid=1234"] ``` the mounted synced folder will be owned by the user with ID `1234` and the group with ID `1234`. The `owner` and `group` options will be ignored. Symbolic Links --------------- Support for symbolic links across synced folder implementations and host/guest combinations is not consistent. Vagrant does its best to make sure symbolic links work by configuring various hypervisors (such as VirtualBox), but some host/guest combinations still do not work properly. This can affect some development environments that rely on symbolic links. The recommendation is to make sure to test symbolic links on all the host/guest combinations you sync folders on if this is important to you. vagrant Uninstalling Vagrant Uninstalling Vagrant ===================== Uninstalling Vagrant is easy and straightforward. You can either uninstall the Vagrant binary, the user data, or both. The sections below cover how to do this on every platform. Removing the Vagrant Program ----------------------------- Removing the Vagrant program will remove the `vagrant` binary and all dependencies from your machine. 
After uninstalling the program, you can always [reinstall](index) again using standard methods. On **Windows** > Uninstall using the add/remove programs section of the control panel > > On **Mac OS X**: ``` rm -rf /opt/vagrant rm -f /usr/local/bin/vagrant sudo pkgutil --forget com.vagrant.vagrant ``` On **Linux**: ``` rm -rf /opt/vagrant rm -f /usr/bin/vagrant ``` Removing User Data ------------------- Removing the user data will remove all [boxes](../boxes), [plugins](../plugins/index), license files, and any stored state that may be used by Vagrant. Removing the user data effectively makes Vagrant think it is a fresh install. On all platforms, remove the `~/.vagrant.d` directory to delete the user data. When debugging, the Vagrant support team may ask you to remove this directory. Before removing this directory, please make a backup. Running Vagrant will automatically regenerate any data necessary to run, so it is safe to remove the user data at any time. vagrant Installing Vagrant Installing Vagrant =================== Installing Vagrant is extremely easy. Head over to the [Vagrant downloads page](https://www.vagrantup.com/downloads.html) and get the appropriate installer or package for your platform. Install the package using standard procedures for your operating system. The installer will automatically add `vagrant` to your system path so that it is available in terminals. If it is not found, please try logging out and logging back in to your system (this is particularly necessary sometimes for Windows). > **Looking for the gem install?** Vagrant 1.0.x had the option to be installed as a [RubyGem](https://en.wikipedia.org/wiki/RubyGems). This installation method is no longer supported. If you have an old version of Vagrant installed via Rubygems, please remove it prior to installing newer versions of Vagrant. > > > **Beware of system package managers!** Some operating system distributions include a vagrant package in their upstream package repos. Please do not install Vagrant in this manner. Typically these packages are missing dependencies or include very outdated versions of Vagrant. If you install via your system's package manager, it is very likely that you will experience issues. Please use the official installers on the downloads page. > > Running Multiple Hypervisors ----------------------------- Sometimes, certain hypervisors do not allow you to bring up virtual machines if more than one hypervisor is in use. If you are lucky, you might see the following error message come up when trying to bring up a virtual machine with Vagrant and VirtualBox: ``` There was an error while executing `VBoxManage`, a CLI used by Vagrant for controlling VirtualBox. The command and stderr is shown below. Command: ["startvm", <ID of the VM>, "--type", "headless"] Stderr: VBoxManage: error: VT-x is being used by another hypervisor (VERR_VMX_IN_VMX_ROOT_MODE). VBoxManage: error: VirtualBox can't operate in VMX root mode. Please disable the KVM kernel extension, recompile your kernel and reboot (VERR_VMX_IN_VMX_ROOT_MODE) VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole ``` Other operating systems like Windows will blue screen if you attempt to bring up a VirtualBox VM with Hyper-V enabled. Below are a couple of ways to ensure you can use Vagrant and VirtualBox if another hypervisor is present. ### Linux, VirtualBox, and KVM The above error message is because another hypervisor (like KVM) is in use. 
We must blacklist these in order for VirtualBox to run correctly. First find out the name of the hypervisor: ``` $ lsmod | grep kvm kvm_intel 204800 6 kvm 593920 1 kvm_intel irqbypass 16384 1 kvm ``` The one we're interested in is `kvm_intel`. You might have another. Blacklist the hypervisor (run the following as root): ``` # echo 'blacklist kvm-intel' >> /etc/modprobe.d/blacklist.conf ``` Restart your machine and try running vagrant again. ### Windows, VirtualBox, and Hyper-V If you wish to use VirtualBox on Windows, you must ensure that Hyper-V is not enabled on Windows. You can turn off the feature by running this Powershell command: ``` Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All ``` You can also disable it by going through the Windows system settings: * Right click on the Windows button and select ‘Apps and Features’. * Select Turn Windows Features on or off. * Unselect Hyper-V and click OK. You might have to reboot your machine for the changes to take effect. More information about Hyper-V can be read [here](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v). vagrant Upgrading From Vagrant 1.0.x Upgrading From Vagrant 1.0.x ============================= The upgrade process from 1.0.x to 1.x is straightforward. Vagrant is quite [backwards compatible](backwards-compatibility) with Vagrant 1.0.x, so you can simply reinstall Vagrant over your previous installation by downloading the latest package and installing it using standard procedures for your operating system. As the [backwards compatibility](backwards-compatibility) page says, **Vagrant 1.0.x plugins will not work with Vagrant 1.1+**. Many of these plugins have been updated to work with newer versions of Vagrant, so you can look to see if they've been updated. If not however, you will have to remove them before upgrading. It is recommended you remove *all* plugins before upgrading, and then slowly add back the plugins. This usually makes for a smoother upgrade process. > **If your version of Vagrant was installed via Rubygems**, you must uninstall the old version prior to installing the package for the new version of Vagrant. The Rubygems installation is no longer supported. > > vagrant Backwards Compatibility Backwards Compatibility ======================== For 1.0.x ---------- Vagrant 1.1+ provides full backwards compatibility for valid Vagrant 1.0.x Vagrantfiles which do not use plugins. After installing Vagrant 1.1, your 1.0.x environments should continue working without modifications, and existing running machines will continue to be managed properly. This compatibility layer will remain in Vagrant up to and including Vagrant 2.0. It may still exist after that, but Vagrant's compatibility promise is only for two versions. Seeing that major Vagrant releases take years to develop and release, it is safe to stick with your version 1.0.x Vagrantfile for the time being. If you use any Vagrant 1.0.x plugins, you must remove references to these from your Vagrantfile prior to upgrading. Vagrant 1.1+ introduces a new plugin format that will protect against this sort of incompatibility from ever happening again. For 1.x -------- Backwards compatibility between 1.x is not promised, and Vagrantfile syntax stability is not promised until 2.0 final. Any backwards incompatibilities within 1.x will be clearly documented. This is similar to how Vagrant 0.x was handled. 
In practice, Vagrant 0.x only introduced a handful of backwards incompatibilities during the entire development cycle, but the possibility of backwards incompatibilities is made clear so people are not surprised. Vagrant 2.0 final will have a stable Vagrantfile format that will remain backwards compatible, just as 1.0 is considered stable. vagrant Installing Vagrant from Source Installing Vagrant from Source =============================== Installing Vagrant from source is an advanced topic and is only recommended when using the official installer is not an option. This page details the steps and prerequisites for installing Vagrant from source. Install Ruby ------------- You must have a modern Ruby (>= 2.2) in order to develop and build Vagrant. The specific Ruby version is documented in the Vagrant's `gemspec`. Please refer to the `vagrant.gemspec` in the repository on GitHub, as it will contain the most up-to-date requirement. This guide will not discuss how to install and manage Ruby. However, beware of the following pitfalls: * Do **NOT** use the system Ruby - use a Ruby version manager like rvm or chruby * Vagrant plugins are configured based on current environment. If plugins are installed using Vagrant from source, they will not work from the package based Vagrant installation. Clone Vagrant -------------- Clone Vagrant's repository from GitHub into the directory where you keep code on your machine: ``` $ git clone https://github.com/hashicorp/vagrant.git ``` Next, `cd` into that path. All commands will be run from this path: ``` $ cd /path/to/your/vagrant/clone ``` Run the `bundle` command with a required version\* to install the requirements: ``` $ bundle install ``` You can now run Vagrant by running `bundle exec vagrant` from inside that directory. Use Locally ------------ In order to use your locally-installed version of Vagrant in other projects, you will need to create a binstub and add it to your path. First, run the following command from the Vagrant repo: ``` $ bundle --binstubs exec ``` This will generate files in `exec/`, including `vagrant`. You can now specify the full path to the `exec/vagrant` anywhere on your operating system: ``` $ /path/to/vagrant/exec/vagrant init -m hashicorp/precise64 ``` Note that you *will* receive warnings that running Vagrant like this is not supported. It's true. It's not. You should listen to those warnings. If you do not want to specify the full path to Vagrant (i.e. you just want to run `vagrant`), you can create a symbolic link to your exec: ``` $ ln -sf /path/to/vagrant/exec/vagrant /usr/local/bin/vagrant ``` When you want to switch back to the official Vagrant version, simply remove the symlink.
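For example, if the symlink was created at the path used above:

```
$ rm /usr/local/bin/vagrant
$ which vagrant   # should now resolve to the packaged Vagrant install, if one is present
```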
vagrant Upgrading Vagrant Upgrading Vagrant ================== If you are upgrading from Vagrant 1.0.x, please read the [specific page dedicated to that](upgrading-from-1-0). This page covers upgrading Vagrant in general during the 1.x series. Vagrant upgrades during the 1.x release series are straightforward: 1. [Download](https://www.vagrantup.com/downloads.html) the new package 2. Install it over the existing package The installers will properly overwrite and remove old files. It is recommended that no other Vagrant processes are running during the upgrade process. Note that Vagrantfile stability for the new Vagrantfile syntax is not promised until 2.0 final. So while Vagrantfiles made for 1.0.x will [continue to work](backwards-compatibility), newer Vagrantfiles may have backwards incompatible changes until 2.0 final. > **Run into troubles upgrading?** Please [report an issue](https://github.com/hashicorp/vagrant/issues) if you run into problems upgrading. Upgrades are meant to be a smooth process and we consider it a bug if it was not. > > vagrant SSH Config SSH Config =========== **Command: `vagrant ssh-config [name|id]`** This will output valid configuration for an SSH config file to SSH into the running Vagrant machine from `ssh` directly (instead of using `vagrant ssh`). Options -------- * [`--host NAME`](#host-name) - Name of the host for the outputted configuration. vagrant Resume Resume ======= **Command: `vagrant resume [name|id]`** This resumes a Vagrant managed machine that was previously suspended, perhaps with the [suspend command](suspend). The configured provisioners will not run again, by default. You can force the provisioners to re-run by specifying the `--provision` flag. Options ======== * [`--provision`](#provision) - Force the provisioners to run. * [`--provision-with x,y,z`](#provision-with-x-y-z) - This will only run the given provisioners. For example, if you have a `:shell` and `:chef_solo` provisioner and run `vagrant provision --provision-with shell`, only the shell provisioner will be run. vagrant PowerShell PowerShell =========== **Command: `vagrant powershell`** This will open a PowerShell prompt on the host into a running Vagrant guest machine. This command will only work if the machines supports PowerShell. Not every environment will support PowerShell. At the moment, only Windows is supported with this command. Options -------- * [`-c COMMAND`](#c-command) or `--command COMMAND` - This executes a single PowerShell command, prints out the stdout and stderr, and exits. vagrant Status Status ======= **Command: `vagrant status [name|id]`** This will tell you the state of the machines Vagrant is managing. It is quite easy, especially once you get comfortable with Vagrant, to forget whether your Vagrant machine is running, suspended, not created, etc. This command tells you the state of the underlying guest machine. vagrant Init Init ===== **Command: `vagrant init [name [url]]`** This initializes the current directory to be a Vagrant environment by creating an initial [Vagrantfile](../vagrantfile/index) if one does not already exist. If a first argument is given, it will prepopulate the `config.vm.box` setting in the created Vagrantfile. If a second argument is given, it will prepopulate the `config.vm.box_url` setting in the created Vagrantfile. Options -------- * [`--box-version`](#box-version) - (Optional) The box version or box version constraint to add to the `Vagrantfile`. 
* [`--force`](#force) - If specified, this command will overwrite any existing `Vagrantfile`. * [`--minimal`](#minimal) - If specified, a minimal Vagrantfile will be created. This Vagrantfile does not contain the instructional comments that the normal Vagrantfile contains. * [`--output FILE`](#output-file) - This will output the Vagrantfile to the given file. If this is "-", the Vagrantfile will be sent to stdout. * [`--template FILE`](#template-file) - Provide a custom ERB template for generating the Vagrantfile. Examples --------- Create a base Vagrantfile: ``` $ vagrant init hashicorp/precise64 ``` Create a minimal Vagrantfile (no comments or helpers): ``` $ vagrant init -m hashicorp/precise64 ``` Create a new Vagrantfile, overwriting the one at the current path: ``` $ vagrant init -f hashicorp/precise64 ``` Create a Vagrantfile with the specific box, from the specific box URL: ``` $ vagrant init my-company-box https://boxes.company.com/my-company.box ``` Create a Vagrantfile, locking the box to a version constraint: ``` $ vagrant init --box-version '> 0.1.5' hashicorp/precise64 ``` vagrant Command-Line Interface Command-Line Interface ======================= Almost all interaction with Vagrant is done through the command-line interface. The interface is available using the `vagrant` command, and comes installed with Vagrant automatically. The `vagrant` command in turn has many subcommands, such as `vagrant up`, `vagrant destroy`, etc. If you run `vagrant` by itself, help will be displayed showing all available subcommands. In addition to this, you can run any Vagrant command with the `-h` flag to output help about that specific command. For example, try running `vagrant init -h`. The help will output a one sentence synopsis of what the command does as well as a list of all the flags the command accepts. In depth documentation and use cases of various Vagrant commands is available by reading the appropriate sub-section available in the left navigational area of this site. You may also wish to consult the [documentation](../other/environmental-variables) regarding the environmental variables that can be used to configure and control Vagrant in a global way. vagrant Global Status Global Status ============== **Command: `vagrant global-status`** This command will tell you the state of all active Vagrant environments on the system for the currently logged in user. > **This command does not actively verify the state of machines**, and is instead based on a cache. Because of this, it is possible to see stale results (machines say they're running but they're not). For example, if you restart your computer, Vagrant would not know. To prune the invalid entries, run global status with the `--prune` flag. > > The IDs in the output that look like `a1b2c3` can be used to control the Vagrant machine from anywhere on the system. Any Vagrant command that takes a target machine (such as `up`, `halt`, `destroy`) can be used with this ID to control it. For example: `vagrant destroy a1b2c3`. Options -------- * [`--prune`](#prune) - Prunes invalid entries from the list. This is much more time consuming than simply listing the entries. Environment Not Showing Up --------------------------- If your environment is not showing up, you may have to do a `vagrant destroy` followed by a `vagrant up`. If you just upgraded from a previous version of Vagrant, existing environments will not show up in global-status until they are destroyed and recreated. 
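For reference, the output looks roughly like the following; the IDs, names, providers, states, and directories shown here are purely illustrative:

```
$ vagrant global-status
id       name    provider   state    directory
------------------------------------------------------------------------
a1b2c3   default virtualbox running  /home/user/projects/website
d4e5f6   web     virtualbox poweroff /home/user/projects/api
```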
vagrant More Commands More Commands ============== In addition to the commands listed in the sidebar and shown in `vagrant -h`, Vagrant comes with some more commands that are hidden from basic help output. These commands are hidden because they're not useful to beginners or they're not commonly used. We call these commands "non-primary subcommands". You can view all subcommands, including the non-primary subcommands, by running `vagrant list-commands`, which itself is a non-primary subcommand! Note that while you have to run a special command to list the non-primary subcommands, you do not have to do anything special to actually *run* the non-primary subcommands. They're executed just like any other subcommand: `vagrant COMMAND`. The list of non-primary commands is below. Click on any command to learn more about it. * [docker-exec](../docker/commands) * [docker-logs](../docker/commands) * [docker-run](../docker/commands) * <rsync> * <rsync-auto> vagrant Rsync Rsync ====== **Command: `vagrant rsync`** This command forces a re-sync of any [rsync synced folders](../synced-folders/rsync). Note that if you change any settings within the rsync synced folders such as exclude paths, you will need to `vagrant reload` before this command will pick up those changes. vagrant rsync-auto rsync-auto =========== **Command: `vagrant rsync-auto`** This command watches all local directories of any [rsync synced folders](../synced-folders/rsync) and automatically initiates an rsync transfer when changes are detected. This command does not exit until an interrupt is received. The change detection is optimized to use platform-specific APIs to listen for filesystem changes, and does not simply poll the directory. Options -------- * [`--[no-]poll`](#no-poll) - Force Vagrant to watch for changes using filesystem polling instead of filesystem events. This is required for some filesystems that do not support events. Warning: enabling this will make `rsync-auto` *much* slower. By default, polling is disabled. Machine State Changes ---------------------- The `rsync-auto` command does not currently handle machine state changes gracefully. For example, if you start the `rsync-auto` command, then halt the guest machine, then make changes to some files, then boot it back up, `rsync-auto` will not attempt to resync. To ensure that the command works properly, you should start `rsync-auto` only when the machine is running, and shut it down before any machine state changes. You can always force a resync with the <rsync> command. Vagrantfile Changes -------------------- If you change or move your Vagrantfile, the `rsync-auto` command will have to be restarted. For example, if you add synced folders to the Vagrantfile, or move the directory that contains the Vagrantfile, the `rsync-auto` command will either not pick up the changes or may begin experiencing strange behavior. Before making any such changes, it is recommended that you turn off `rsync-auto`, then restart it afterwards. vagrant Port Port ===== **Command: `vagrant port [name|id]`** The port command displays the full list of guest ports mapped to the host machine ports: ``` $ vagrant port 22 (guest) => 2222 (host) 80 (guest) => 8080 (host) ``` In a multi-machine Vagrantfile, the name of the machine must be specified: ``` $ vagrant port my-machine ``` Options -------- * [`--guest PORT`](#guest-port) - This displays just the host port that corresponds to the given guest port. If the guest is not forwarding that port, an error is returned. 
This is useful for quick scripting, for example: ``` $ ssh -p $(vagrant port --guest 22) ``` * [`--machine-readable`](#machine-readable) - This tells Vagrant to display machine-readable output instead of the human-friendly output. More information is available in the [machine-readable output](machine-readable) documentation. vagrant Provision Provision ========== **Command: `vagrant provision [vm-name]`** Runs any configured [provisioners](../provisioning/index) against the running Vagrant managed machine. This command is a great way to quickly test any provisioners, and is especially useful for incremental development of shell scripts, Chef cookbooks, or Puppet modules. You can just make simple modifications to the provisioning scripts on your machine, run a `vagrant provision`, and check for the desired results. Rinse and repeat. Options ======== * [`--provision-with x,y,z`](#provision-with-x-y-z) - This will only run the given provisioners. For example, if you have a `:shell` and `:chef_solo` provisioner and run `vagrant provision --provision-with shell`, only the shell provisioner will be run. vagrant Package Package ======== **Command: `vagrant package [name|id]`** This packages a currently running *VirtualBox* or *Hyper-V* environment into a re-usable [box](../boxes). This command can only be used with other [providers](../providers/index) based on the provider implementation and if the provider supports it. Options -------- * [`--base NAME`](#base-name) - Instead of packaging a VirtualBox machine that Vagrant manages, this will package a VirtualBox machine that VirtualBox manages. `NAME` should be the name or UUID of the machine from the VirtualBox GUI. Currently this option is only available for VirtualBox. * [`--output NAME`](#output-name) - The resulting package will be saved as `NAME`. By default, it will be saved as `package.box`. * [`--include x,y,z`](#include-x-y-z) - Additional files will be packaged with the box. These can be used by a packaged Vagrantfile (documented below) to perform additional tasks. * [`--vagrantfile FILE`](#vagrantfile-file) - Packages a Vagrantfile with the box, that is loaded as part of the [Vagrantfile load order](../vagrantfile/index#load-order) when the resulting box is used. > **A common misconception** is that the `--vagrantfile` option will package a Vagrantfile that is used when `vagrant init` is used with this box. This is not the case. Instead, a Vagrantfile is loaded and read as part of the Vagrant load process when the box is used. For more information, read about the [Vagrantfile load order](../vagrantfile/index#load-order). > > vagrant Login Login ====== **Command: `vagrant login`** The login command is used to authenticate with the [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud) server. Logging in is only necessary if you are accessing protected boxes or using [Vagrant Share](../share/index). **Logging in is not a requirement to use Vagrant.** The vast majority of Vagrant does *not* require a login. Only certain features such as protected boxes or [Vagrant Share](../share/index) require a login. The reference of available command-line flags to this command is available below. Options -------- * [`--check`](#check) - This will check if you are logged in. In addition to outputting whether you are logged in or not, the command will have exit status 0 if you are logged in, and exit status 1 if you are not. * [`--logout`](#logout) - This will log you out if you are logged in. 
If you are already logged out, this command will do nothing. It is not an error to call this command if you are already logged out. * [`--token`](#token) - This will set the Vagrant Cloud login token manually to the provided string. It is assumed this token is a valid Vagrant Cloud access token. Examples --------- Securely authenticate to Vagrant Cloud using a username and password: ``` $ vagrant login # ... Vagrant Cloud username: Vagrant Cloud password: ``` Check if the current user is authenticated: ``` $ vagrant login --check You are already logged in. ``` Securely authenticate with Vagrant Cloud using a token: ``` $ vagrant login --token ABCD1234 The token was successfully saved. ``` vagrant Up Up === **Command: `vagrant up [name|id]`** This command creates and configures guest machines according to your [Vagrantfile](../vagrantfile/index). This is the single most important command in Vagrant, since it is how any Vagrant machine is created. Anyone using Vagrant must use this command on a day-to-day basis. Options -------- * [`name`](#name) - Name of machine defined in [Vagrantfile](../vagrantfile/index) * [`id`](#id) - Machine id found with `vagrant global-status`. Using `id` allows you to call `vagrant up id` from any directory. * [`--[no-]destroy-on-error`](#no-destroy-on-error) - Destroy the newly created machine if a fatal, unexpected error occurs. This will only happen on the first `vagrant up`. By default this is set. * [`--[no-]install-provider`](#no-install-provider) - If the requested provider is not installed, Vagrant will attempt to automatically install it if it can. By default this is enabled. * [`--[no-]parallel`](#no-parallel) - Bring multiple machines up in parallel if the provider supports it. Please consult the provider documentation to see if this feature is supported. * [`--provider x`](#provider-x) - Bring the machine up with the given [provider](../providers/index). By default this is "virtualbox". * [`--[no-]provision`](#no-provision) - Force, or prevent, the provisioners to run. * [`--provision-with x,y,z`](#provision-with-x-y-z) - This will only run the given provisioners. For example, if you have a `:shell` and `:chef_solo` provisioner and run `vagrant provision --provision-with shell`, only the shell provisioner will be run. vagrant Suspend Suspend ======== **Command: `vagrant suspend [name|id]`** This suspends the guest machine Vagrant is managing, rather than fully [shutting it down](halt) or [destroying it](destroy). A suspend effectively saves the *exact point-in-time state* of the machine, so that when you <resume> it later, it begins running immediately from that point, rather than doing a full boot. This generally requires extra disk space to store all the contents of the RAM within your guest machine, but the machine no longer consumes the RAM of your host machine or CPU cycles while it is suspended. vagrant Plugin Plugin ======= **Command: `vagrant plugin`** This is the command used to manage [plugins](../plugins/index). The main functionality of this command is exposed via another level of subcommands: * [`expunge`](#plugin-expunge) * [`install`](#plugin-install) * [`license`](#plugin-license) * [`list`](#plugin-list) * [`repair`](#plugin-repair) * [`uninstall`](#plugin-uninstall) * [`update`](#plugin-update) Plugin Expunge =============== **Command: `vagrant plugin expunge`** This removes all user installed plugin information. All plugin gems, their dependencies, and the `plugins.json` file are removed. 
This command provides a simple mechanism to fully remove all user installed custom plugins. When upgrading Vagrant it may be required to reinstall plugins due to an internal incompatibility. The expunge command can help make that process easier by attempting to automatically reinstall currently configured plugins: ``` # Delete all plugins and reinstall $ vagrant plugin expunge --reinstall ``` This command accepts optional command-line flags: * [`--force`](#force) - Do not prompt for confirmation prior to removal * [`--global-only`](#global-only) - Only expunge global plugins * [`--local`](#local) - Include plugins in local project * [`--local-only`](#local-only) - Only expunge local project plugins * [`--reinstall`](#reinstall) - Attempt to reinstall plugins after removal Plugin Install =============== **Command: `vagrant plugin install <name>...`** This installs a plugin with the given name or file path. If the name is not a path to a file, then the plugin is installed from remote repositories, usually [RubyGems](https://rubygems.org). This command will also update a plugin if it is already installed, but you can also use `vagrant plugin update` for that. ``` # Installing a plugin from a known gem source $ vagrant plugin install my-plugin # Installing a plugin from a local file source $ vagrant plugin install /path/to/my-plugin.gem ``` If multiple names are specified, multiple plugins will be installed. If flags are given below, the flags will apply to *all* plugins being installed by the current command invocation. If the plugin is already installed, this command will reinstall it with the latest version available. This command accepts optional command-line flags: * [`--entry-point ENTRYPOINT`](#entry-point-entrypoint) - By default, installed plugins are loaded internally by loading an initialization file of the same name as the plugin. Most of the time, this is correct. If the plugin you are installing has another entrypoint, this flag can be used to specify it. * [`--local`](#local-1) - Install plugin to the local Vagrant project only. * [`--plugin-clean-sources`](#plugin-clean-sources) - Clears all sources that have been defined so far. This is an advanced feature. The use case is primarily for corporate firewalls that prevent access to RubyGems.org. * [`--plugin-source SOURCE`](#plugin-source-source) - Adds a source from which to fetch a plugin. Note that this does not only affect the single plugin being installed, but all future plugins as well. This is a limitation of the underlying plugin installer Vagrant uses. * [`--plugin-version VERSION`](#plugin-version-version) - The version of the plugin to install. By default, this command will install the latest version. You can constrain the version using this flag. You can set it to a specific version, such as "1.2.3" or you can set it to a version constraint, such as "> 1.0.2". You can set it to a more complex constraint by comma-separating multiple constraints: "> 1.0.2, < 1.1.0" (do not forget to quote these on the command-line). Plugin License =============== **Command: `vagrant plugin license <name> <license-file>`** This command installs a license for a proprietary Vagrant plugin, such as the [VMware Fusion provider](../vmware). Plugin List ============ **Command: `vagrant plugin list`** This lists all installed plugins and their respective installed versions. If a version constraint was specified for a plugin when installing it, the constraint will be listed as well. Other plugin-specific information may be shown, too.
This command accepts optional command-line flags: * [`--local`](#local-2) - Include local project plugins. Plugin Repair ============== Vagrant may fail to properly initialize user installed custom plugins. This can be caused by improper plugin installation/removal, or by manual manipulation of plugin related files like the `plugins.json` data file. Vagrant can attempt to automatically repair the problem. If automatic repair is not successful, refer to the [expunge](#plugin-expunge) command. This command accepts optional command-line flags: * [`--local`](#local-3) - Repair local project plugins. Plugin Uninstall ================= **Command: `vagrant plugin uninstall <name> [<name2> <name3> ...]`** This uninstalls the plugin with the given name. Any dependencies of the plugin will also be uninstalled assuming no other plugin needs them. If multiple plugins are given, multiple plugins will be uninstalled. This command accepts optional command-line flags: * [`--local`](#local-4) - Uninstall plugin from local project. Plugin Update ============== **Command: `vagrant plugin update [<name>]`** This updates the plugins that are installed within Vagrant. If you specified version constraints when installing the plugin, this command will respect those constraints. If you want to change a version constraint, re-install the plugin using `vagrant plugin install`. If a name is specified, only that single plugin will be updated. If a name is specified for a plugin that is not installed, this command will not install it. This command accepts optional command-line flags: * [`--local`](#local-5) - Update plugin from local project.
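For example, to update every installed plugin at once, or just a single one (`my-plugin` is the same placeholder name used in the install examples above):

```
# Update all installed plugins
$ vagrant plugin update

# Update only one plugin
$ vagrant plugin update my-plugin
```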
vagrant Snapshot Snapshot ========= **Command: `vagrant snapshot`** This is the command used to manage snapshots with the guest machine. Snapshots record a point-in-time state of a guest machine. You can then quickly restore to this environment. This lets you experiment and try things and quickly restore back to a previous state. Snapshotting is not supported by every provider. If it is not supported, Vagrant will give you an error message. The main functionality of this command is exposed via even more subcommands: * [`push`](#snapshot-push) * [`pop`](#snapshot-pop) * [`save`](#snapshot-save) * [`restore`](#snapshot-restore) * [`list`](#snapshot-list) * [`delete`](#snapshot-delete) Snapshot Push ============== **Command: `vagrant snapshot push`** This takes a snapshot and pushes it onto the snapshot stack. This is a shorthand for `vagrant snapshot save` where you do not need to specify a name. When you call the inverse `vagrant snapshot pop`, it will restore the pushed state. > **Warning:** If you are using `push` and `pop`, avoid using `save` and `restore` which are unsafe to mix. > > Snapshot Pop ============= **Command: `vagrant snapshot pop`** This command is the inverse of `vagrant snapshot push`: it will restore the pushed state. Options -------- * [`--[no-]provision`](#no-provision) - Force the provisioners to run (or prevent them from doing so). * [`--no-delete`](#no-delete) - Prevents deletion of the snapshot after restoring (so that you can restore to the same point again later). Snapshot Save ============== **Command: `vagrant snapshot save [vm-name] NAME`** This command saves a new named snapshot. If this command is used, the `push` and `pop` subcommands cannot be safely used. Snapshot Restore ================= **Command: `vagrant snapshot restore [vm-name] NAME`** This command restores the named snapshot. * [`--[no-]provision`](#no-provision-1) - Force the provisioners to run (or prevent them from doing so). Snapshot List ============== **Command: `vagrant snapshot list`** This command will list all the snapshots taken. Snapshot Delete ================ **Command: `vagrant snapshot delete NAME`** This command will delete the named snapshot. Some providers require all "child" snapshots to be deleted first. Vagrant itself does not track what these children are. If this is the case (such as with VirtualBox), then you must be sure to delete the snapshots in the reverse order they were taken. This command is typically *much faster* if the machine is halted prior to snapshotting. If this is not an option, or is not ideal, then the deletion can also be done online with most providers. vagrant Cloud Cloud ====== **Command: `vagrant cloud`** This is the command used to manage anything related to [Vagrant Cloud](https://vagrantcloud.com). The main functionality of this command is exposed via subcommands: * [`auth`](#cloud-auth) * [`box`](#cloud-box) * [`provider`](#cloud-provider) * [`publish`](#cloud-publish) * [`search`](#cloud-search) * [`version`](#cloud-version) Cloud Auth =========== **Command: `vagrant cloud auth`** The `cloud auth` command is for handling all things related to authorization with Vagrant Cloud. * [`login`](#cloud-auth-login) * [`logout`](#cloud-auth-logout) * [`whoami`](#cloud-auth-whoami) Cloud Auth Login ----------------- **Command: `vagrant cloud auth login`** The login command is used to authenticate with [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud) server. Logging in is only necessary if you are accessing protected boxes. 
**Logging in is not a requirement to use Vagrant.** The vast majority of Vagrant does *not* require a login. Only certain features, such as accessing protected boxes, require a login. The reference of available command-line flags to this command is available below. ### Options * [`--check`](#check) - This will check if you are logged in. In addition to outputting whether you are logged in or not, the command exit status will be 0 if you are logged in, or 1 if you are not. * [`--logout`](#logout) - This will log you out if you are logged in. If you are already logged out, this command will do nothing. It is not an error to call this command if you are already logged out. * [`--token`](#token) - This will set the Vagrant Cloud login token manually to the provided string. It is assumed this token is a valid Vagrant Cloud access token. ### Examples Securely authenticate to Vagrant Cloud using a username and password: ``` $ vagrant cloud auth login # ... Vagrant Cloud username: Vagrant Cloud password: ``` Check if the current user is authenticated: ``` $ vagrant cloud auth login --check You are already logged in. ``` Securely authenticate with Vagrant Cloud using a token: ``` $ vagrant cloud auth login --token ABCD1234 The token was successfully saved. ``` Cloud Auth Logout ------------------ **Command: `vagrant cloud auth logout`** This will log you out if you are logged in. If you are already logged out, this command will do nothing. It is not an error to call this command if you are already logged out. Cloud Auth Whoami ------------------ **Command: `vagrant cloud auth whoami [TOKEN]`** This command will validate your Vagrant Cloud token and will print the user who it belongs to. If a token is passed in, it will attempt to validate it instead of the token stored on disk. Cloud Box ========== **Command: `vagrant cloud box`** The `cloud box` command is used to manage life cycle operations for all `box` entities on Vagrant Cloud. * [`create`](#cloud-box-create) * [`delete`](#cloud-box-delete) * [`show`](#cloud-box-show) * [`update`](#cloud-box-update) Cloud Box Create ----------------- **Command: `vagrant cloud box create ORGANIZATION/BOX-NAME`** The box create command is used to create a new box entry on Vagrant Cloud. ### Options * [`--description DESCRIPTION`](#description-description) - A full description of the box. Can be formatted with Markdown. * [`--short-description DESCRIPTION`](#short-description-description) - A short summary of the box. * [`--private`](#private) - Will make the new box private (Public by default) Cloud Box Delete ----------------- **Command: `vagrant cloud box delete ORGANIZATION/BOX-NAME`** The box delete command will *permanently* delete the given box entry on Vagrant Cloud. Before making the request, it will ask if you are sure you want to delete the box. Cloud Box Show --------------- **Command: `vagrant cloud box show ORGANIZATION/BOX-NAME`** The box show command will display information about the latest version for the given Vagrant box. Cloud Box Update ----------------- **Command: `vagrant cloud box update ORGANIZATION/BOX-NAME`** The box update command will update an already created box on Vagrant Cloud with the given options. ### Options * [`--description DESCRIPTION`](#description-description-1) - A full description of the box. Can be formatted with Markdown. * [`--short-description DESCRIPTION`](#short-description-description-1) - A short summary of the box. 
* [`--private`](#private-1) - Will make the new box private (Public by default) Cloud Provider =============== **Command: `vagrant cloud provider`** The `cloud provider` command is used to manage the life cycle operations for all `provider` entities on Vagrant Cloud. * [`create`](#cloud-provider-create) * [`delete`](#cloud-provider-delete) * [`update`](#cloud-provider-update) * [`upload`](#cloud-provider-upload) Cloud Provider Create ---------------------- **Command: `vagrant cloud provider create ORGANIZATION/BOX-NAME PROVIDER-NAME VERSION [URL]`** The provider create command is used to create a new provider entry on Vagrant Cloud. The `url` argument is expected to be a remote URL that Vagrant Cloud can use to download the provider. If no `url` is specified, the provider entry can be updated later with a URL, or the [upload](#cloud-provider-upload) command can be used to upload a Vagrant [box file](../boxes). Cloud Provider Delete ---------------------- **Command: `vagrant cloud provider delete ORGANIZATION/BOX-NAME PROVIDER-NAME VERSION`** The provider delete command is used to delete a provider entry on Vagrant Cloud. Before making the request, it will ask if you are sure you want to delete the provider. Cloud Provider Update ---------------------- **Command: `vagrant cloud provider update ORGANIZATION/BOX-NAME PROVIDER-NAME VERSION [URL]`** The provider update command will update an already created provider for a box on Vagrant Cloud with the given options. Cloud Provider Upload ---------------------- **Command: `vagrant cloud provider upload ORGANIZATION/BOX-NAME PROVIDER-NAME VERSION BOX-FILE`** The provider upload command will upload a Vagrant [box file](../boxes) to Vagrant Cloud for the specified version and provider. Cloud Publish ============== **Command: `vagrant cloud publish ORGANIZATION/BOX-NAME VERSION PROVIDER-NAME [PROVIDER-FILE]`** The publish command is a complete solution for creating and updating a Vagrant box on Vagrant Cloud. Instead of having to create each attribute of a Vagrant box with separate commands, the publish command asks you to provide all the required information up front before creating or updating the box. Options -------- * [`--box-version VERSION`](#box-version-version) - Version to create for the box * [`--description DESCRIPTION`](#description-description-2) - A full description of the box. Can be formatted with Markdown. * [`--force`](#force) - Disables confirmation when creating or updating a box. * [`--short-description DESCRIPTION`](#short-description-description-2) - A short summary of the box. * [`--private`](#private-2) - Will make the new box private (Public by default) * [`--release`](#release) - Automatically releases the box after creation (Unreleased by default) * [`--url`](#url) - Valid remote URL to download the box file * [`--version-description DESCRIPTION`](#version-description-description) - Description of the version that will be created. Examples --------- Creating a new box on Vagrant Cloud: ``` $ vagrant cloud publish briancain/supertest 1.0.0 virtualbox boxes/my/virtualbox.box -d "A really cool box to download and use" --version-description "A cool version" --release --short-description "Download me!" You are about to create a box on Vagrant Cloud with the following options: briancain/supertest (1.0.0) for virtualbox Automatic Release: true Box Description: A really cool box to download and use Box Short Description: Download me! Version Description: A cool version Do you wish to continue? [y/N] y Creating a box entry... 
Creating a version entry... Creating a provider entry... Uploading provider with file /Users/vagrant/boxes/my/virtualbox.box Releasing box... Complete! Published briancain/supertest tag: briancain/supertest username: briancain name: supertest private: false downloads: 0 created_at: 2018-07-25T17:53:04.340Z updated_at: 2018-07-25T18:01:10.665Z short_description: Download me! description_markdown: A really cool box to download and use current_version: 1.0.0 providers: virtualbox ``` Cloud Search ============= **Command: `vagrant cloud search QUERY`** The cloud search command will take a query and search Vagrant Cloud for any matching Vagrant boxes. Various filters can be applied to the results. Options -------- * [`--json`](#json) - Format search results in JSON. * [`--page PAGE`](#page-page) - The page to display. Defaults to the first page of results. * [`--short`](#short) - Shows a simple list of box names for the results. * [`--order ORDER`](#order-order) - Order to display results. Can either be `desc` or `asc`. Defaults to `desc`. * [`--limit LIMIT`](#limit-limit) - Max number of search results to display. Defaults to 25. * [`--provider PROVIDER`](#provider-provider) - Filter search results to a single provider. * [`--sort-by SORT`](#sort-by-sort) - The field to sort results on. Can be `created`, `downloads`, or `updated`. Defaults to `downloads`. Examples --------- If you are looking for a HashiCorp box: ``` vagrant cloud search hashicorp --limit 5 | NAME | VERSION | DOWNLOADS | PROVIDERS | +-------------------------+---------+-----------+---------------------------------+ | hashicorp/precise64 | 1.1.0 | 6,675,725 | virtualbox,vmware_fusion,hyperv | | hashicorp/precise32 | 1.0.0 | 2,261,377 | virtualbox | | hashicorp/boot2docker | 1.7.8 | 59,284 | vmware_desktop,virtualbox | | hashicorp/connect-vm | 0.1.0 | 6,912 | vmware_desktop,virtualbox | | hashicorp/vagrant-share | 0.1.0 | 3,488 | vmware_desktop,virtualbox | +-------------------------+---------+-----------+---------------------------------+ ``` Cloud Version ============== **Command: `vagrant cloud version`** The `cloud version` command is used to manage life cycle operations for all `version` entities for a box on Vagrant Cloud. * [`create`](#cloud-version-create) * [`delete`](#cloud-version-delete) * [`release`](#cloud-version-release) * [`revoke`](#cloud-version-revoke) * [`update`](#cloud-version-update) Cloud Version Create --------------------- **Command: `vagrant cloud version create ORGANIZATION/BOX-NAME VERSION`** The cloud create command creates a version entry for a box on Vagrant Cloud. ### Options * [`--description DESCRIPTION`](#description-description-3) - Description of the version that will be created. Cloud Version Delete --------------------- **Command: `vagrant cloud version delete ORGANIZATION/BOX-NAME VERSION`** The cloud delete command deletes a version entry for a box on Vagrant Cloud. Before making the request, it will ask if you are sure you want to delete the version. Cloud Version Release ---------------------- **Command: `vagrant cloud version release ORGANIZATION/BOX-NAME VERSION`** The cloud release command releases a version entry for a box on Vagrant Cloud if it already exists. Before making the request, it will ask if you are sure you want to release the version. Cloud Version Revoke --------------------- **Command: `vagrant cloud version revoke ORGANIZATION/BOX-NAME VERSION`** The cloud revoke command revokes a version entry for a box on Vagrant Cloud if it already exists. 
Before making the request, it will ask if you are sure you want to revoke the version. Cloud Version Update --------------------- **Command: `vagrant cloud version update ORGANIZATION/BOX-NAME VERSION`** ### Options * [`--description DESCRIPTION`](#description-description-4) - Description of the version that will be created. vagrant Destroy Destroy ======== **Command: `vagrant destroy [name|id]`** This command stops the running machine Vagrant is managing and destroys all resources that were created during the machine creation process. After running this command, your computer should be left in a clean state, as if you never created the guest machine in the first place. For Linux-based guests, Vagrant uses the `shutdown` command to gracefully terminate the machine. Due to the varying nature of operating systems, the `shutdown` command may exist at many different locations in the guest's `$PATH`. It is the guest machine's responsibility to properly populate the `$PATH` with the directory containing the `shutdown` command. Options -------- * [`-f`](#f) or `--force` - Do not ask for confirmation before destroying. * [`--[no-]parallel`](#no-parallel) - Destroys multiple machines in parallel if the provider supports it. Please consult the provider documentation to see if this feature is supported. > The `destroy` command does not remove a box that may have been installed on your computer during `vagrant up`. Thus, even if you run `vagrant destroy`, the box installed in the system will still be present on the hard drive. To return your computer to the state it was in before the `vagrant up` command, you need to use `vagrant box remove`. > > For more information, read about the [`vagrant box remove`](box) command. > > vagrant Validate Validate ========= **Command: `vagrant validate`** This command validates your [Vagrantfile](../vagrantfile/index). Examples --------- ``` $ vagrant validate Vagrantfile validated successfully. ``` vagrant Reload Reload ======= **Command: `vagrant reload [name|id]`** The equivalent of running a [halt](halt) followed by an [up](up). This command is usually required for changes made in the Vagrantfile to take effect. After making any modifications to the Vagrantfile, a `reload` should be called. The configured provisioners will not run again, by default. You can force the provisioners to re-run by specifying the `--provision` flag. Options -------- * [`--provision`](#provision) - Force the provisioners to run. * [`--provision-with x,y,z`](#provision-with-x-y-z) - This will only run the given provisioners. For example, if you have a `:shell` and `:chef_solo` provisioner and run `vagrant reload --provision-with shell`, only the shell provisioner will be run. vagrant RDP RDP ==== **Command: `vagrant rdp`** This will start an RDP client for a remote desktop session with the guest. This only works for Vagrant environments that support remote desktop, which is typically only Windows. Raw Arguments -------------- You can pass raw arguments through to your RDP client on the command line by appending them after a `--`. Vagrant just passes these through. For example: ``` $ vagrant rdp -- /span ``` The above command on Windows will execute `mstsc.exe /span config.rdp`, allowing your RDP session to span multiple desktops. On Darwin hosts, such as Mac OS X, the additional arguments are added to the generated RDP configuration file. Since these files can contain multiple options with different spacing, you *must* quote multiple arguments. 
For example: ``` $ vagrant rdp -- "screen mode id:i:0" "other config:s:value" ``` Note that as of the publishing of this guide, the Microsoft RDP Client for Mac does *not* perform validation on the configuration file. This means if you specify an invalid configuration option or make a typographical error, the client will silently ignore the error and continue! vagrant Aliases Aliases ======== Inspired in part by Git's own [alias functionality](https://git-scm.com/book/en/v2/Git-Basics-Git-Aliases), aliases make your Vagrant experience simpler, easier, and more familiar by allowing you to create your own custom Vagrant commands. Aliases can be defined within `VAGRANT_HOME/aliases` file, or in a custom file defined using the `VAGRANT_ALIAS_FILE` environment variable, in the following format: ``` # basic command-level aliases start = up stop = halt # advanced command-line aliases eradicate = !vagrant destroy && rm -rf .vagrant ``` In a nutshell, aliases are defined using a standard `key = value` format, where the `key` is the new Vagrant command, and the `value` is the aliased command. Using this format, there are two types of aliases that can be defined: internal and external aliases. Internal Aliases ----------------- Internal command aliases call the CLI class directly, allowing you to alias one Vagrant command to another Vagrant command. This technique can be very useful for creating commands that you think *should* exist. For example, if `vagrant stop` feels more intuitive than `vagrant halt`, the following alias definitions would make that change possible: ``` stop = halt ``` This makes the following commands equivalent: ``` vagrant stop vagrant halt ``` External Aliases ----------------- While internal aliases can be used to define more intuitive Vagrant commands, external command aliases are used to define Vagrant commands with brand new functionality. These aliases are prefixed with the `!` character, which indicates to the interpreter that the alias should be executed as a shell command. For example, let's say that you want to be able to view the processor and memory utilization of the active project's virtual machine. To do this, you could define a `vagrant metrics` command that returns the required information in an easy-to-read format, like so: ``` metrics = !ps aux | grep "[V]BoxHeadless" | grep $(cat .vagrant/machines/default/virtualbox/id) | awk '{ printf("CPU: %.02f%%, Memory: %.02f%%", $3, $4) }' ``` The above alias, from within the context of an active Vagrant project, would print the CPU and memory utilization directly to the console: ``` CPU: 4.20%, Memory: 11.00% ```
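As a quick sketch of how these pieces fit together, the following shows one way to wire up a custom alias file through the `VAGRANT_ALIAS_FILE` environment variable. The file path and the specific aliases here are only examples, not defaults:

```
# Point Vagrant at a custom alias file (the path is just an example)
$ export VAGRANT_ALIAS_FILE="$HOME/.vagrant-aliases"

# Define one internal and one external alias
$ cat "$HOME/.vagrant-aliases"
stop = halt
eradicate = !vagrant destroy && rm -rf .vagrant

# The aliases now behave like ordinary subcommands
$ vagrant stop
$ vagrant eradicate
```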
vagrant Machine Readable Output Machine Readable Output ======================== Every Vagrant command accepts a `--machine-readable` flag which enables machine readable output mode. In this mode, the output to the terminal is replaced with machine-friendly output. This mode makes it easy to programmatically execute Vagrant and read data out of it. This output format is protected by our [backwards compatibility](../installation/backwards-compatibility) policy. Until Vagrant 2.0 is released, however, the machine readable output may change as we determine more use cases for it. But the backwards compatibility promise should make it safe to write client libraries to parse the output format. > **Advanced topic!** This is an advanced topic for use only if you want to programmatically execute Vagrant. If you are just getting started with Vagrant, you may safely skip this section. > > Work-In-Progress ----------------- The machine-readable output is very new (released as part of Vagrant 1.4). We're still gathering use cases for it and building up the output for each of the commands. It is likely that what you may want to achieve with the machine-readable output is not possible due to missing information. In this case, we ask that you please [open an issue](https://github.com/hashicorp/vagrant/issues) requesting that certain information become available. We will most likely add it! Format ------- The machine readable format is a line-oriented, comma-delimited text format. This makes it extremely easy to parse using standard Unix tools such as awk or grep in addition to full programming languages like Ruby or Python. The format is: ``` timestamp,target,type,data... ``` Each component is explained below: * **timestamp** is a Unix timestamp in UTC of when the message was printed. * **target** is the target of the following output. This is empty if the message is related to Vagrant globally. Otherwise, this is generally a machine name so you can relate output to a specific machine when multi-VM is in use. * **type** is the type of machine-readable message being outputted. There are a set of standard types which are covered later. * **data** is zero or more comma-separated values associated with the prior type. The exact amount and meaning of this data is type-dependent, so you must read the documentation associated with the type to understand fully. Within the format, if data contains a comma, it is replaced with `%!(VAGRANT_COMMA)`. This was preferred over an escape character such as \' because it is more friendly to tools like awk. Newlines within the format are replaced with their respective standard escape sequence. Newlines become a literal `\n` within the output. Carriage returns become a literal `\r`. Types ------ This section documents all the available types that may be outputted with the machine-readable output. | Type | Description | | --- | --- | | box-name | Name of a box installed into Vagrant. | | box-provider | Provider for an installed box. | | cli-command | A subcommand of `vagrant` that is available. | | error-exit | An error occurred that caused Vagrant to exit. This contains that error. Contains two data elements: type of error, error message. | | provider-name | The provider name of the target machine. targeted | | ssh-config | The OpenSSH compatible SSH config for a machine. This is usually the result of the "ssh-config" command. targeted | | state | The state ID of the target machine. targeted | | state-human-long | Human-readable description of the state of the machine. 
This is the long version, and may be a paragraph or longer. targeted | | state-human-short | Human-readable description of the state of the machine. This is the short version, limited to at most a sentence. targeted | vagrant Halt Halt ===== **Command: `vagrant halt [name|id]`** This command shuts down the running machine Vagrant is managing. Vagrant will first attempt to gracefully shut down the machine by running the guest OS shutdown mechanism. If this fails, or if the `--force` flag is specified, Vagrant will effectively just shut off power to the machine. For Linux-based guests, Vagrant uses the `shutdown` command to gracefully terminate the machine. Due to the varying nature of operating systems, the `shutdown` command may exist at many different locations in the guest's `$PATH`. It is the guest machine's responsibility to properly populate the `$PATH` with the directory containing the `shutdown` command. Options -------- * [`-f`](#f) or `--force` - Do not attempt to gracefully shut down the machine. This effectively pulls the power on the guest machine. vagrant Box Box ==== **Command: `vagrant box`** This is the command used to manage (add, remove, etc.) [boxes](../boxes). The main functionality of this command is exposed via even more subcommands: * [`add`](#box-add) * [`list`](#box-list) * [`outdated`](#box-outdated) * [`prune`](#box-prune) * [`remove`](#box-remove) * [`repackage`](#box-repackage) * [`update`](#box-update) Box Add ======== **Command: `vagrant box add ADDRESS`** This adds a box with the given address to Vagrant. The address can be one of three things: * A shorthand name from the [public catalog of available Vagrant images](https://vagrantcloud.com/boxes/search), such as "hashicorp/precise64". * File path or HTTP URL to a box in a [catalog](https://vagrantcloud.com/boxes/search). For HTTP, basic authentication is supported and `http_proxy` environmental variables are respected. HTTPS is also supported. * URL directly to a box file. In this case, you must specify a `--name` flag (see below) and versioning/updates will not work. If an error occurs during the download or the download is interrupted with a Ctrl-C, then Vagrant will attempt to resume the download the next time it is requested. Vagrant will only attempt to resume a download for 24 hours after the initial download. Options -------- * [`--box-version VALUE`](#box-version-value) - The version of the box you want to add. By default, the latest version will be added. The value of this can be an exact version number such as "1.2.3" or it can be a set of version constraints. A version constraint looks like ">= 1.0, < 2.0". * [`--cacert CERTFILE`](#cacert-certfile) - The certificate for the CA used to verify the peer. This should be used if the remote end does not use a standard root CA. * [`--capath CERTDIR`](#capath-certdir) - The certificate directory for the CA used to verify the peer. This should be used if the remote end does not use a standard root CA. * [`--cert CERTFILE`](#cert-certfile) - A client certificate to use when downloading the box, if necessary. * [`--clean`](#clean) - If given, Vagrant will remove any old temporary files from prior downloads of the same URL. This is useful if you do not want Vagrant to resume a download from a previous point, perhaps because the contents changed. * [`--force`](#force) - When present, the box will be downloaded and will overwrite any existing box with this name. * [`--insecure`](#insecure) - When present, SSL certificates will not be verified if the URL is an HTTPS URL. 
* [`--provider PROVIDER`](#provider-provider) - If given, Vagrant will verify the box you are adding is for the given provider. By default, Vagrant automatically detects the proper provider to use. Options for direct box files ----------------------------- The options below only apply if you are adding a box file directly (when you are not using a catalog). * [`--checksum VALUE`](#checksum-value) - A checksum for the box that is downloaded. If specified, Vagrant will compare this checksum to what is actually downloaded and will error if the checksums do not match. This is highly recommended since box files are so large. If this is specified, `--checksum-type` must also be specified. If you are downloading from a catalog, the checksum is included within the catalog entry. * [`--checksum-type TYPE`](#checksum-type-type) - The type of checksum that `--checksum` is if it is specified. Supported values are currently "md5", "sha1", and "sha256". * [`--name VALUE`](#name-value) - Logical name for the box. This is the value that you would put into `config.vm.box` in your Vagrantfile. When adding a box from a catalog, the name is included in the catalog entry and does not have to be specified. > **Checksums for versioned boxes or boxes from HashiCorp's Vagrant Cloud:** For boxes from HashiCorp's Vagrant Cloud, the checksums are embedded in the metadata of the box. The metadata itself is served over TLS and its format is validated. > > Box List ========= **Command: `vagrant box list`** This command lists all the boxes that are installed into Vagrant. Box Outdated ============= **Command: `vagrant box outdated`** This command tells you whether or not the box you are using in your current Vagrant environment is outdated. If the `--global` flag is present, every installed box will be checked for updates. Checking for updates involves refreshing the metadata associated with a box. This generally requires an internet connection. Options -------- * [`--global`](#global) - Check for updates for all installed boxes, not just the boxes for the current Vagrant environment. Box Prune ========== **Command: `vagrant box prune`** This command removes old versions of installed boxes. If the box is currently in use vagrant will ask for confirmation. Options -------- * [`--provider PROVIDER`](#provider-provider-1) - The specific provider type for the boxes to destroy. * [`--dry-run`](#dry-run) - Only print the boxes that would be removed. * [`--name NAME`](#name-name) - The specific box name to check for outdated versions. * [`--force`](#force-1) - Destroy without confirmation even when box is in use. Box Remove =========== **Command: `vagrant box remove NAME`** This command removes a box from Vagrant that matches the given name. If a box has multiple providers, the exact provider must be specified with the `--provider` flag. If a box has multiple versions, you can select what versions to delete with the `--box-version` flag or remove all versions with the `--all` flag. Options -------- * [`--box-version VALUE`](#box-version-value-1) - Version of version constraints of the boxes to remove. See documentation on this flag for `box add` for more details. * [`--all`](#all) - Remove all available versions of a box. * [`--force`](#force-2) - Forces removing the box even if an active Vagrant environment is using it. * [`--provider VALUE`](#provider-value) - The provider-specific box to remove with the given name. This is only required if a box is backed by multiple providers. 
If there is only a single provider, Vagrant will default to removing it. Box Repackage ============== **Command: `vagrant box repackage NAME PROVIDER VERSION`** This command repackages the given box and puts it in the current directory so you can redistribute it. The name, provider, and version of the box can be retrieved using `vagrant box list`. When you add a box, Vagrant unpacks it and stores it internally. The original `*.box` file is not preserved. This command is useful for reclaiming a `*.box` file from an installed Vagrant box. Box Update =========== **Command: `vagrant box update`** This command updates the box for the current Vagrant environment if there are updates available. The command can also update a specific box (outside of an active Vagrant environment), by specifying the `--box` flag. *Note that updating the box will not update an already-running Vagrant machine. To reflect the changes in the box, you will have to destroy and bring back up the Vagrant machine.* If you just want to check if there are updates available, use the `vagrant box outdated` command. Options -------- * [`--box VALUE`](#box-value) - Name of a specific box to update. If this flag is not specified, Vagrant will update the boxes for the active Vagrant environment. * [`--provider VALUE`](#provider-value-1) - When `--box` is present, this controls what provider-specific box to update. This is not required unless the box has multiple providers. Without the `--box` flag, this has no effect. vagrant Version Version ======== **Command: `vagrant version`** This command tells you the version of Vagrant you have installed as well as the latest version of Vagrant that is currently available. In order to determine the latest available Vagrant version, this command must make a network call. If you only want to see the currently installed version, use `vagrant --version`. vagrant Connect Connect ======== **Command: `vagrant connect NAME`** The connect command complements the [share command](share) by enabling access to shared environments. You can learn about all the details of Vagrant Share in the [Vagrant Share section](../share/index). The reference of available command-line flags to this command is available below. Options -------- * [`--disable-static-ip`](#disable-static-ip) - The connect command will not spin up a small virtual machine to create a static IP you can access. When this flag is set, the only way to access the connection is to use the SOCKS proxy address outputted. * [`--static-ip IP`](#static-ip-ip) - Tells connect what static IP address to use for the virtual machine. By default, Vagrant connect will use an IP address that looks available in the 172.16.0.0/16 space. * [`--ssh`](#ssh) - Connects via SSH to an environment shared with `vagrant share --ssh`. vagrant Share Share ====== **Command: `vagrant share`** The share command initializes a Vagrant Share session, allowing you to share your Vagrant environment with anyone in the world, enabling collaboration directly in your Vagrant environment in almost any network environment. You can learn about all the details of Vagrant Share in the [Vagrant Share section](../share/index). The reference of available command-line flags to this command is available below. Options -------- * [`--disable-http`](#disable-http) - Disables the creation of a publicly accessible HTTP endpoint to your Vagrant environment. With this set, the only way to access your share is with `vagrant connect`. 
* [`--http PORT`](#http-port) - The port of the HTTP server running in the Vagrant environment. By default, Vagrant will attempt to find this for you. This has no effect if `--disable-http` is set. * [`--https PORT`](#https-port) - The port of an HTTPS server running in the Vagrant environment. By default, Vagrant will attempt to find this for you. This has no effect if `--disable-http` is set. * [`--ssh`](#ssh) - Enables SSH sharing (more information below). By default, this is not enabled. * [`--ssh-no-password`](#ssh-no-password) - Disables the encryption of the SSH keypair created when SSH sharing is enabled. * [`--ssh-port PORT`](#ssh-port-port) - The port of the SSH server running in the Vagrant environment. By default, Vagrant will attempt to find this for you. * [`--ssh-once`](#ssh-once) - Allows SSH access only once. After the first attempt to connect via SSH to the Vagrant environment, the generated keypair is destroyed. vagrant SSH SSH ==== **Command: `vagrant ssh [name|id] [-- extra_ssh_args]`** This will SSH into a running Vagrant machine and give you access to a shell. On a simple vagrant project, the instance created will be named default. Vagrant will ssh into this instance without the instance name: ``` $ vagrant ssh Welcome to your Vagrant-built virtual machine. Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2 $ logout Connection to 127.0.0.1 closed. ``` Or you could use the name: ``` $ vagrant ssh default Welcome to your Vagrant-built virtual machine. Last login: Fri Jul 20 15:09:52 2018 from 10.0.2.2 $ logout Connection to 127.0.0.1 closed. $ ``` On multi-machine setups, you can login to each vm using the name as displayed on `vagrant status` ``` $ vagrant status Current machine states: node1 running (virtualbox) node2 running (virtualbox) This environment represents multiple VMs. The VMs are all listed above with their current state. $ vagrant ssh node1 Welcome to your Vagrant-built virtual machine. Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2 vagrant@precise64:~$ logout Connection to 127.0.0.1 closed. $ vagrant ssh node2 Welcome to your Vagrant-built virtual machine. Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2 vagrant@precise64:~$ logout Connection to 127.0.0.1 closed. $ ``` On a system with machines running from different projects, you could use the id as listed in `vagrant global-status` ``` $ vagrant global-status id name provider state directory ----------------------------------------------------------------------- 13759ff node1 virtualbox running /Users/user/vagrant/folder The above shows information about all known Vagrant environments on this machine. This data is cached and may not be completely up-to-date (use "vagrant global-status --prune" to prune invalid entries). To interact with any of the machines, you can go to that directory and run Vagrant, or you can use the ID directly with Vagrant commands from any directory. $ vagrant ssh 13759ff Welcome to your Vagrant-built virtual machine. Last login: Fri Jul 20 15:19:36 2018 from 10.0.2.2 vagrant@precise64:~$ logout Connection to 127.0.0.1 closed. $ ``` If a `--` (two hyphens) are found on the command line, any arguments after this are passed directly into the `ssh` executable. This allows you to pass any arbitrary commands to do things such as reverse tunneling down into the `ssh` program. Options -------- * [`-c COMMAND`](#c-command) or `--command COMMAND` - This executes a single SSH command, prints out the stdout and stderr, and exits. 
* [`-p`](#p) or `--plain` - This does an SSH without authentication, leaving authentication up to the user. SSH client usage ----------------- Vagrant will attempt to use the local SSH client installed on the host machine. On POSIX machines, an SSH client must be installed and available on the PATH. For Windows installations, an SSH client is provided within the installer image. If no SSH client is found on the current PATH, Vagrant will use the SSH client it provided. Depending on the local environment used for running Vagrant, the installer provided SSH client may not work correctly. For example, when using a cygwin or msys2 shell the SSH client will fail to work as expected when run interactively. Installing the SSH package built for the current working environment will resolve this issue. Background Execution --------------------- If the command you specify runs in the background (such as appending a `&` to a shell command), it will be terminated almost immediately. This is because when Vagrant executes the command, it executes it within the context of a shell, and when the shell exits, all of the child processes also exit. To avoid this, you will need to detach the process from the shell. Please Google to learn how to do this for your shell. One method of doing this is the `nohup` command. Pageant on Windows ------------------- The SSH executable will not be able to access Pageant on Windows. While Vagrant is capable of accessing Pageant via internal libraries, the SSH executable does not have support for Pageant. This means keys from Pageant will not be available for forwarding when using the `vagrant ssh` command. Third party programs exist to allow the SSH executable to access Pageant by creating a unix socket for the SSH executable to read. For more information please see [ssh-pageant](https://github.com/cuviper/ssh-pageant). vagrant Creating a Base Box Creating a Base Box ==================== As with [every Vagrant provider](../providers/basic_usage), the Vagrant VirtualBox provider has a custom box format that affects how base boxes are made. Prior to reading this, you should read the [general guide to creating base boxes](../boxes/base). Actually, it would probably be most useful to keep this open in a separate tab as you may be referencing it frequently while creating a base box. That page contains important information about common software to install on the box. Additionally, it is helpful to understand the [basics of the box file format](../boxes/format). > **Advanced topic!** This is a reasonably advanced topic that a beginning user of Vagrant does not need to understand. If you are just getting started with Vagrant, skip this and use an available box. If you are an experienced user of Vagrant and want to create your own custom boxes, this is for you. > > Virtual Machine ---------------- The virtual machine created in VirtualBox can use any configuration you would like, but Vagrant has some hard requirements: * The first network interface (adapter 1) *must* be a NAT adapter. Vagrant uses this to connect the first time. * The MAC address of the first network interface (the NAT adapter) should be noted, since you will need to put it in a Vagrantfile later as the value for `config.vm.base_mac`. To get this value, use the VirtualBox GUI. Other than the above, you are free to customize the base virtual machine as you see fit. 
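If you prefer the command line to the GUI, the MAC address of the first (NAT) adapter can also be read with `VBoxManage`. A rough sketch, where the VM name is just a placeholder for whatever you named the machine in VirtualBox:

```
# Print the MAC address of adapter 1 for the VM named "my-base-box"
$ VBoxManage showvminfo "my-base-box" --machinereadable | grep macaddress1
```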
Additional Software -------------------- In addition to the software that should be installed based on the [general guide to creating base boxes](../boxes/base), VirtualBox base boxes require some additional software. ### VirtualBox Guest Additions [VirtualBox Guest Additions](https://www.virtualbox.org/manual/ch04.html) must be installed so that things such as shared folders can function. Installing guest additions also usually improves performance since the guest OS can make some optimizations by knowing it is running within VirtualBox. Before installing the guest additions, you will need the linux kernel headers and the basic developer tools. On Ubuntu, you can easily install these like so: ``` $ sudo apt-get install linux-headers-$(uname -r) build-essential dkms ``` #### To install via the GUI: Next, make sure that the guest additions image is available by using the GUI and clicking on "Devices" followed by "Install Guest Additions". Then mount the CD-ROM to some location. On Ubuntu, this usually looks like this: ``` $ sudo mount /dev/cdrom /media/cdrom ``` Finally, run the shell script that matches your system to install the guest additions. For example, for Linux on x86, it is the following: ``` $ sudo sh /media/cdrom/VBoxLinuxAdditions.run ``` If the command succeeds, then the guest additions are now installed! #### To install via the command line: You can find the appropriate guest additions version to match your VirtualBox version by selecting the appropriate version [here](http://download.virtualbox.org/virtualbox/). The examples below use 4.3.8, which was the latest VirtualBox version at the time of writing. ``` wget http://download.virtualbox.org/virtualbox/4.3.8/VBoxGuestAdditions_4.3.8.iso sudo mkdir /media/VBoxGuestAdditions sudo mount -o loop,ro VBoxGuestAdditions_4.3.8.iso /media/VBoxGuestAdditions sudo sh /media/VBoxGuestAdditions/VBoxLinuxAdditions.run rm VBoxGuestAdditions_4.3.8.iso sudo umount /media/VBoxGuestAdditions sudo rmdir /media/VBoxGuestAdditions ``` If you did not install a Desktop environment when you installed the operating system, as recommended to reduce size, the install of the VirtualBox additions should warn you about the lack of OpenGL or Window System Drivers, but you can safely ignore this. If the commands succeed, then the guest additions are now installed! Packaging the Box ------------------ Vagrant includes a simple way to package VirtualBox base boxes. Once you've installed all the software you want to install, you can run this command: ``` $ vagrant package --base my-virtual-machine ``` Where "my-virtual-machine" is replaced by the name of the virtual machine in VirtualBox to package as a base box. It will take a few minutes, but after it is complete, a file "package.box" should be in your working directory which is the new base box. At this point, you've successfully created a base box! Raw Contents ------------- This section documents the actual raw contents of the box file. This is not as useful when creating a base box but can be useful in debugging issues if necessary. A VirtualBox base box is an archive of the resulting files of [exporting](https://www.virtualbox.org/manual/ch08.html#vboxmanage-export) a VirtualBox virtual machine. Here is an example of what is contained in such a box: ``` $ tree . |-- Vagrantfile |-- box-disk1.vmdk |-- box.ovf |-- metadata.json 0 directories, 4 files ``` In addition to the files from exporting a VirtualBox VM, there is the "metadata.json" file used by Vagrant itself. Also, there is a "Vagrantfile." 
This contains some configuration to properly set the MAC address of the NAT network device, since VirtualBox requires this to be correct in order to function properly. If you are not using `vagrant package --base` above, you will have to set the `config.vm.base_mac` setting in this Vagrantfile to the MAC address of the NAT device without colons. When bringing up a VirtualBox backed machine, Vagrant [imports](https://www.virtualbox.org/manual/ch08.html#vboxmanage-import) the "box.ovf" file found in the box contents.
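For reference, the embedded Vagrantfile described above can be as small as the following sketch; the MAC address shown is only a placeholder and must be replaced with the actual value of your NAT adapter:

```
Vagrant.configure("2") do |config|
  # MAC address of the first (NAT) network adapter, written without colons
  config.vm.base_mac = "080027B0C1D2"
end
```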
vagrant VirtualBox VirtualBox =========== Vagrant comes with support out of the box for [VirtualBox](https://www.virtualbox.org), a free, cross-platform consumer virtualization product. The VirtualBox provider is compatible with VirtualBox versions 4.0.x, 4.1.x, 4.2.x, 4.3.x, 5.0.x, 5.1.x, and 5.2.x. Other versions are unsupported and the provider will display an error message. Please note that beta and pre-release versions of VirtualBox are not supported and may not be well-behaved. VirtualBox must be installed on its own prior to using the provider, or the provider will display an error message asking you to install it. VirtualBox can be installed by [downloading](https://www.virtualbox.org/wiki/Downloads) a package or installer for your operating system and using standard procedures to install that package. Use the navigation to the left to find a specific VirtualBox topic to read more about. vagrant Configuration Configuration ============== The VirtualBox provider exposes some additional configuration options that allow you to more finely control your VirtualBox-powered Vagrant environments. GUI vs. Headless ----------------- By default, VirtualBox machines are started in headless mode, meaning there is no UI for the machines visible on the host machine. Sometimes, you want to have a UI. Common use cases include wanting to see a browser that may be running in the machine, or debugging a strange boot issue. You can easily tell the VirtualBox provider to boot with a GUI: ``` config.vm.provider "virtualbox" do |v| v.gui = true end ``` Virtual Machine Name --------------------- You can customize the name that appears in the VirtualBox GUI by setting the `name` property. By default, Vagrant sets it to the containing folder of the Vagrantfile plus a timestamp of when the machine was created. By setting another name, your VM can be more easily identified. ``` config.vm.provider "virtualbox" do |v| v.name = "my_vm" end ``` Linked Clones -------------- By default new machines are created by importing the base box. For large boxes this produces a large overhead in terms of time (the import operation) and space (the new machine contains a copy of the base box's image). Using linked clones can drastically reduce this overhead. Linked clones are based on a master VM, which is generated by importing the base box only once the first time it is required. For the linked clones only differencing disk images are created where the parent disk image belongs to the master VM. ``` config.vm.provider "virtualbox" do |v| v.linked_clone = true end ``` To have backward compatibility: ``` config.vm.provider 'virtualbox' do |v| v.linked_clone = true if Gem::Version.new(Vagrant::VERSION) >= Gem::Version.new('1.8.0') end ``` If you do not want backward compatibility and want to force users to support linked cloning, you can use `Vagrant.require_version` with 1.8. > **Note:** the generated master VMs are currently not removed automatically by Vagrant. This has to be done manually. However, a master VM can only be removed when there are no linked clones connected to it. > > VBoxManage Customizations -------------------------- [VBoxManage](https://www.virtualbox.org/manual/ch08.html) is a utility that can be used to make modifications to VirtualBox virtual machines from the command line. 
Vagrant exposes a way to call any command against VBoxManage just prior to booting the machine: ``` config.vm.provider "virtualbox" do |v| v.customize ["modifyvm", :id, "--cpuexecutioncap", "50"] end ``` In the example above, the VM is modified to have a host CPU execution cap of 50%, meaning that no matter how much CPU is used in the VM, no more than 50% would be used on your own host machine. Some details: * The `:id` special parameter is replaced with the ID of the virtual machine being created, so when a VBoxManage command requires an ID, you can pass this special parameter. * Multiple `customize` directives can be used. They will be executed in the order given. There are some convenience shortcuts for memory and CPU settings: ``` config.vm.provider "virtualbox" do |v| v.memory = 1024 v.cpus = 2 end ``` vagrant Common Issues Common Issues ============== This page lists some common issues people run into with Vagrant and VirtualBox as well as solutions for those issues. Hanging on Windows ------------------- If Vagrant commands are hanging on Windows because they're communicating to VirtualBox, this may be caused by a permissions issue with VirtualBox. This is easy to fix. Starting VirtualBox as a normal user or as an administrator will prevent you from using it in the opposite way. Please keep in mind that when Vagrant interacts with VirtualBox, it will interact with it with the same access level as the console running Vagrant. To fix this issue, completely shut down all VirtualBox machines and GUIs. Wait a few seconds. Then, launch VirtualBox only with the access level you wish to use. DNS Not Working ---------------- If DNS is not working within your VM, then you may need to enable a DNS proxy (built-in to VirtualBox). Please [see the StackOverflow answers here](https://serverfault.com/questions/453185/vagrant-virtualbox-dns-10-0-2-3-not-working) for a guide on how to do that. vagrant Networking Networking =========== VirtualBox Internal Network ---------------------------- The Vagrant VirtualBox provider supports using the private network as a VirtualBox [internal network](https://www.virtualbox.org/manual/ch06.html#network_internal). By default, private networks are host-only networks, because those are the easiest to work with. However, internal networks can be enabled as well. To specify a private network as an internal network for VirtualBox use the `virtualbox__intnet` option with the network. The `virtualbox__` (double underscore) prefix tells Vagrant that this option is only for the VirtualBox provider. ``` Vagrant.configure("2") do |config| config.vm.network "private_network", ip: "192.168.50.4", virtualbox__intnet: true end ``` Additionally, if you want to specify that the VirtualBox provider join a specific internal network, specify the name of the internal network: ``` Vagrant.configure("2") do |config| config.vm.network "private_network", ip: "192.168.50.4", virtualbox__intnet: "mynetwork" end ``` VirtualBox NIC Type -------------------- You can specify a specific NIC type for the created network interface by using the `nic_type` parameter. This is not prefixed by `virtualbox__` for legacy reasons, but is VirtualBox-specific. This is an advanced option and should only be used if you know what you are using, since it can cause the network device to not work at all. 
Example: ``` Vagrant.configure("2") do |config| config.vm.network "private_network", ip: "192.168.50.4", nic_type: "virtio" end ``` vagrant Usage Usage ====== The Vagrant VirtualBox provider is used just like any other provider. Please read the general [basic usage](../providers/basic_usage) page for providers. The value to use for the `--provider` flag is `virtualbox`. The Vagrant VirtualBox provider does not support parallel execution at this time. Specifying the `--parallel` option will have no effect. vagrant Vagrant Push Vagrant Push ============= Heroku Strategy ---------------- [Heroku](https://heroku.com/ "Heroku") is a public PAAS provider that makes it easy to deploy an application. The Vagrant Push Heroku strategy pushes your application's code to Heroku. > **Warning:** The Vagrant Push Heroku strategy requires you have configured your Heroku credentials and created the Heroku application. This documentation will not cover these prerequisites, but you can read more about them in the [Heroku documentation](https://devcenter.heroku.com). > > Only files which are committed to the Git repository will be pushed to Heroku. Additionally, the current working branch is always pushed to the Heroku, even if it is not the "master" branch. The Vagrant Push Heroku strategy supports the following configuration options: * [`app`](#app) - The name of the Heroku application. If the Heroku application does not exist, an exception will be raised. If this value is not specified, the basename of the directory containing the `Vagrantfile` is assumed to be the name of the Heroku application. Since this value can change between users, it is highly recommended that you add the `app` setting to your `Vagrantfile`. * [`dir`](#dir) - The base directory containing the Git repository to upload to Heroku. By default this is the same directory as the Vagrantfile, but you can specify this if you have a nested Git directory. * [`remote`](#remote) - The name of the Git remote where Heroku is configured. The default value is "heroku". ### Usage The Vagrant Push Heroku strategy is defined in the `Vagrantfile` using the `heroku` key: ``` config.push.define "heroku" do |push| push.app = "my_application" end ``` And then push the application to Heroku: ``` $ vagrant push ``` vagrant Vagrant Push Vagrant Push ============= As of version 1.7, Vagrant is capable of deploying or "pushing" application code in the same directory as your Vagrantfile to a remote such as an FTP server. Pushes are defined in an application's `Vagrantfile` and are invoked using the `vagrant push` subcommand. Much like other components of Vagrant, each Vagrant Push plugin has its own configuration options. Please consult the documentation for your Vagrant Push plugin for more information. Here is an example Vagrant Push configuration section in a `Vagrantfile`: ``` config.push.define "ftp" do |push| push.host = "ftp.company.com" push.username = "..." # ... end ``` When the application is ready to be deployed to the FTP server, just run a single command: ``` $ vagrant push ``` Much like [Vagrant Providers](../providers/index "Vagrant Providers"), Vagrant Push also supports multiple backend declarations. Consider the common scenario of a staging and QA environment: ``` config.push.define "staging", strategy: "ftp" do |push| # ... end config.push.define "qa", strategy: "ftp" do |push| # ... 
end ``` In this scenario, the user must pass the name of the Vagrant Push to the subcommand: ``` $ vagrant push staging ``` Vagrant Push is the easiest way to deploy your application. You can read more in the documentation links on the sidebar. vagrant Vagrant Push Vagrant Push ============= FTP & SFTP Strategy -------------------- Vagrant Push FTP and SFTP strategy pushes the code in your Vagrant development environment to a remote FTP or SFTP server. The Vagrant Push FTP And SFTP strategy supports the following configuration options: * [`host`](#host) - The address of the remote (S)FTP server. If the (S)FTP server is running on a non-standard port, you can specify the port after the address (`host:port`). * [`username`](#username) - The username to use for authentication with the (S)FTP server. * [`password`](#password) - The password to use for authentication with the (S)FTP server. * [`passive`](#passive) - Use passive FTP (default is true). * [`secure`](#secure) - Use secure (SFTP) (default is false). * [`destination`](#destination) - The root destination on the target system to sync the files (default is `/`). * [`exclude`](#exclude) - Add a file or file pattern to exclude from the upload, relative to the `dir`. This value may be specified multiple times and is additive. `exclude` take precedence over `include` values. * [`include`](#include) - Add a file or file pattern to include in the upload, relative to the `dir`. This value may be specified multiple times and is additive. * [`dir`](#dir) - The base directory containing the files to upload. By default this is the same directory as the Vagrantfile, but you can specify this if you have a `src` folder or `bin` folder or some other folder you want to upload. ### Usage The Vagrant Push FTP and SFTP strategy is defined in the `Vagrantfile` using the `ftp` key: ``` config.push.define "ftp" do |push| push.host = "ftp.company.com" push.username = "username" push.password = "password" end ``` And then push the application to the FTP or SFTP server: ``` $ vagrant push ``` vagrant Vagrant Push Vagrant Push ============= Local Exec Strategy -------------------- The Vagrant Push Local Exec strategy allows the user to invoke an arbitrary shell command or script as part of a push. > **Warning:** The Vagrant Push Local Exec strategy does not perform any validation on the correctness of the shell script. > > The Vagrant Push Local Exec strategy supports the following configuration options: * [`script`](#script) - The path to a script on disk (relative to the `Vagrantfile`) to execute. Vagrant will attempt to convert this script to an executable, but an exception will be raised if that fails. * [`inline`](#inline) - The inline script to execute (as a string). * [`args`](#args) (string or array) - Optional arguments to pass to the shell script when executing it as a single string. These arguments must be written as if they were typed directly on the command line, so be sure to escape characters, quote, etc. as needed. You may also pass the arguments in using an array. In this case, Vagrant will handle quoting for you. Please note - only one of the `script` and `inline` options may be specified in a single push definition. ### Usage The Vagrant Push Local Exec strategy is defined in the `Vagrantfile` using the `local-exec` key: Remote path: ``` config.push.define "local-exec" do |push| push.inline = <<-SCRIPT scp -r . server:/var/www/website SCRIPT end ``` Local path: ``` config.push.define "local-exec" do |push| push.inline = <<-SCRIPT cp -r . 
/var/www/website SCRIPT end ``` For more complicated scripts, you may store them in a separate file and read them from the `Vagrantfile` like so: ``` config.push.define "local-exec" do |push| push.script = "my-script.sh" end ``` And then invoke the push with Vagrant: ``` $ vagrant push ``` ### Script Arguments Refer to [Shell Provisioner](../provisioning/shell). vagrant SSH Settings SSH Settings ============= **Config namespace: `config.ssh`** The settings within `config.ssh` relate to configuring how Vagrant will access your machine over SSH. As with most Vagrant settings, the defaults are typically fine, but you can fine tune whatever you would like. Available Settings ------------------- * [`config.ssh.username`](#config-ssh-username) (string) - This sets the username that Vagrant will SSH as by default. Providers are free to override this if they detect a more appropriate user. By default this is "vagrant", since that is what most public boxes are made as. * [`config.ssh.password`](#config-ssh-password) (string) - This sets a password that Vagrant will use to authenticate the SSH user. Note that Vagrant recommends you use key-based authentication rather than a password (see `private_key_path`) below. If you use a password, Vagrant will automatically insert a keypair if `insert_key` is true. * [`config.ssh.host`](#config-ssh-host) (string) - The hostname or IP to SSH into. By default this is empty, because the provider usually figures this out for you. * [`config.ssh.port`](#config-ssh-port) (integer) - The port to SSH into. By default this is port 22. * [`config.ssh.guest_port`](#config-ssh-guest_port) (integer) - The port on the guest that SSH is running on. This is used by some providers to detect forwarded ports for SSH. For example, if this is set to 22 (the default), and Vagrant detects a forwarded port to port 22 on the guest from port 4567 on the host, Vagrant will attempt to use port 4567 to talk to the guest if there is no other option. * [`config.ssh.private_key_path`](#config-ssh-private_key_path) (string, array of strings) - The path to the private key to use to SSH into the guest machine. By default this is the insecure private key that ships with Vagrant, since that is what public boxes use. If you make your own custom box with a custom SSH key, this should point to that private key. You can also specify multiple private keys by setting this to be an array. This is useful, for example, if you use the default private key to bootstrap the machine, but replace it with perhaps a more secure key later. * [`config.ssh.keys_only`](#config-ssh-keys_only) (boolean) - Only use Vagrant-provided SSH private keys (do not use any keys stored in ssh-agent). The default value is `true`. * [`config.ssh.verify_host_key`](#config-ssh-verify_host_key) (string, symbol) - Perform strict host-key verification. The default value is `:never`. * [`config.ssh.paranoid`](#config-ssh-paranoid) (boolean) - Perform strict host-key verification. The default value is `false`. **Deprecation:** The `config.ssh.paranoid` option is deprecated and will be removed in a future release. Please use the `config.ssh.verify_host_key` option instead. * [`config.ssh.forward_agent`](#config-ssh-forward_agent) (boolean) - If `true`, agent forwarding over SSH connections is enabled. Defaults to false. * [`config.ssh.forward_x11`](#config-ssh-forward_x11) (boolean) - If `true`, X11 forwarding over SSH connections is enabled. Defaults to false. 
* [`config.ssh.forward_env`](#config-ssh-forward_env) (array of strings) - An array of host environment variables to forward to the guest. If you are familiar with OpenSSH, this corresponds to the `SendEnv` parameter. ``` config.ssh.forward_env = ["CUSTOM_VAR"] ``` * [`config.ssh.insert_key`](#config-ssh-insert_key) (boolean) - If `true`, Vagrant will automatically insert a keypair to use for SSH, replacing Vagrant's default insecure key inside the machine if detected. By default, this is true. This only has an effect if you do not already use private keys for authentication or if you are relying on the default insecure key. If you do not have to care about security in your project and want to keep using the default insecure key, set this to `false`. * [`config.ssh.proxy_command`](#config-ssh-proxy_command) (string) - A command-line command to execute that receives the data to send to SSH on stdin. This can be used to proxy the SSH connection. `%h` in the command is replaced with the host and `%p` is replaced with the port. * [`config.ssh.pty`](#config-ssh-pty) (boolean) - If `true`, a pty will be used for provisioning. Defaults to false. This setting is an *advanced feature* that should not be enabled unless absolutely necessary. It breaks some other features of Vagrant, and is really only exposed for cases where it is absolutely necessary. If you can find a way to not use a pty, that is recommended instead. When pty is enabled, it is important to note that command output will *not* be streamed to the UI. Instead, the output will be delivered in full to the UI once the command has completed. * [`config.ssh.keep_alive`](#config-ssh-keep_alive) (boolean) - If `true`, SSH will send keep-alive packets every 5 seconds to keep connections alive. * [`config.ssh.shell`](#config-ssh-shell) (string) - The shell to use when executing SSH commands from Vagrant. By default this is `bash -l`. Note that this has no effect on the shell you get when you run `vagrant ssh`. This configuration option only affects the shell used when executing commands internally in Vagrant. * [`config.ssh.export_command_template`](#config-ssh-export_command_template) (string) - The template used to generate exported environment variables in the active session. This can be useful when using a Bourne-incompatible shell like C shell. The template supports two variables which are replaced with the desired environment variable key and environment variable value: `%ENV_KEY%` and `%ENV_VALUE%`. The default template is: ``` config.ssh.export_command_template = 'export %ENV_KEY%="%ENV_VALUE%"' ``` * [`config.ssh.sudo_command`](#config-ssh-sudo_command) (string) - The command to use when executing a command with `sudo`. This defaults to `sudo -E -H %c`. The `%c` will be replaced by the command that is being executed. * [`config.ssh.compression`](#config-ssh-compression) (boolean) - If `false`, this setting will not include the compression setting when ssh'ing into a machine. If this is not set, it will default to `true` and `Compression=yes` will be enabled with ssh. * [`config.ssh.dsa_authentication`](#config-ssh-dsa_authentication) (boolean) - If `false`, this setting will not include `DSAAuthentication` when ssh'ing into a machine. If this is not set, it will default to `true` and `DSAAuthentication=yes` will be used with ssh. * [`config.ssh.extra_args`](#config-ssh-extra_args) (array of strings) - The value of this setting is passed directly to the `ssh` executable.
This allows you to pass arbitrary options down to the `ssh` program, for example to set up reverse tunneling. The options can be either single flags given as strings, such as `"-6"` for IPv6, or an array of arguments, such as `["-L", "8008:localhost:80"]` to open a tunnel from port 8008 on the host to port 80 on the guest. A short example combining several of the settings above is sketched below.
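The following is a minimal sketch of how a few of these `config.ssh` settings might be combined in a `Vagrantfile`. The key path, environment variable name, and tunnel ports are placeholders, not recommendations:

```
Vagrant.configure("2") do |config|
  # Use a custom private key instead of Vagrant's default insecure key
  config.ssh.private_key_path = "keys/deploy_key"
  config.ssh.insert_key       = false

  # Forward the local SSH agent and a host environment variable
  config.ssh.forward_agent = true
  config.ssh.forward_env   = ["CUSTOM_VAR"]

  # Pass extra flags straight through to the ssh executable
  config.ssh.extra_args = ["-L", "8008:localhost:80"]
end
```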
vagrant Vagrantfile Vagrantfile ============ The primary function of the Vagrantfile is to describe the type of machine required for a project, and how to configure and provision these machines. Vagrantfiles are called Vagrantfiles because the actual literal filename for the file is `Vagrantfile` (casing does not matter unless your file system is running in a strict case sensitive mode). Vagrant is meant to run with one Vagrantfile per project, and the Vagrantfile is supposed to be committed to version control. This allows other developers involved in the project to check out the code, run `vagrant up`, and be on their way. Vagrantfiles are portable across every platform Vagrant supports. The syntax of Vagrantfiles is [Ruby](http://www.ruby-lang.org), but knowledge of the Ruby programming language is not necessary to make modifications to the Vagrantfile, since it is mostly simple variable assignment. In fact, Ruby is not even the most popular community Vagrant is used within, which should help show you that despite not having Ruby knowledge, people are very successful with Vagrant. Lookup Path ------------ When you run any `vagrant` command, Vagrant climbs up the directory tree looking for the first Vagrantfile it can find, starting first in the current directory. So if you run `vagrant` in `/home/mitchellh/projects/foo`, it will search the following paths in order for a Vagrantfile, until it finds one: ``` /home/mitchellh/projects/foo/Vagrantfile /home/mitchellh/projects/Vagrantfile /home/mitchellh/Vagrantfile /home/Vagrantfile /Vagrantfile ``` This feature lets you run `vagrant` from any directory in your project. You can change the starting directory where Vagrant looks for a Vagrantfile by setting the `VAGRANT_CWD` environmental variable to some other path. Load Order and Merging ----------------------- An important concept to understand is how Vagrant loads Vagrantfiles. Vagrant actually loads a series of Vagrantfiles, merging the settings as it goes. This allows Vagrantfiles of varying level of specificity to override prior settings. Vagrantfiles are loaded in the order shown below. Note that if a Vagrantfile is not found at any step, Vagrant continues with the next step. 1. Vagrantfile packaged with the [box](../boxes) that is to be used for a given machine. 2. Vagrantfile in your Vagrant home directory (defaults to `~/.vagrant.d`). This lets you specify some defaults for your system user. 3. Vagrantfile from the project directory. This is the Vagrantfile that you will be modifying most of the time. 4. [Multi-machine overrides](../multi-machine/index) if any. 5. [Provider-specific overrides](../providers/configuration), if any. At each level, settings set will be merged with previous values. What this exactly means depends on the setting. For most settings, this means that the newer setting overrides the older one. However, for things such as defining networks, the networks are actually appended to each other. By default, you should assume that settings will override each other. If the behavior is different, it will be noted in the relevant documentation section. Within each Vagrantfile, you may specify multiple `Vagrant.configure` blocks. All configurations will be merged within a single Vagrantfile in the order they're defined. Available Configuration Options -------------------------------- You can learn more about the available configuration options by clicking the relevant section in the left navigational area. 
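As a small illustration of merging within a single Vagrantfile, the sketch below defines two `Vagrant.configure` blocks; the box names are only placeholders. The blocks are merged in the order they are defined, so the second `vm.box` assignment overrides the first:

```
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
end

Vagrant.configure("2") do |config|
  # Merged with the block above; this later setting wins.
  config.vm.box = "my-company/custom-box"
end
```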
vagrant WinSSH WinSSH ======= The WinSSH communicator is built specifically for the Windows native port of OpenSSH. It does not rely on a POSIX-like environment, which removes the requirement to install extra software (like Cygwin) for proper functionality. For more information, see the [Win32-OpenSSH project page](https://github.com/PowerShell/Win32-OpenSSH/). WinSSH Settings ================ The WinSSH communicator uses the same connection configuration options as the SSH communicator. These settings provide the information for the communicator to establish a connection to the VM. The configuration options below are specific to the WinSSH communicator. **Config namespace: `config.winssh`** Available Settings ------------------- * [`config.winssh.forward_agent`](#config-winssh-forward_agent) (boolean) - If `true`, agent forwarding over SSH connections is enabled. Defaults to false. * [`config.winssh.forward_env`](#config-winssh-forward_env) (array of strings) - An array of host environment variables to forward to the guest. If you are familiar with OpenSSH, this corresponds to the `SendEnv` parameter. ``` config.winssh.forward_env = ["CUSTOM_VAR"] ``` * [`config.winssh.proxy_command`](#config-winssh-proxy_command) (string) - A command-line command to execute that receives the data to send to SSH on stdin. This can be used to proxy the SSH connection. `%h` in the command is replaced with the host and `%p` is replaced with the port. * [`config.winssh.keep_alive`](#config-winssh-keep_alive) (boolean) - If `true`, SSH will send keep-alive packets every 5 seconds to keep connections alive. * [`config.winssh.shell`](#config-winssh-shell) (string) - The shell to use when executing SSH commands from Vagrant. By default this is `cmd`. Valid values are `"cmd"` or `"powershell"`. Note that this has no effect on the shell you get when you run `vagrant ssh`. This configuration option only affects the shell used when executing commands internally in Vagrant. * [`config.winssh.export_command_template`](#config-winssh-export_command_template) (string) - The template used to generate exported environment variables in the active session. This can be useful when using a Bourne-incompatible shell like C shell. The template supports two variables which are replaced with the desired environment variable key and environment variable value: `%ENV_KEY%` and `%ENV_VALUE%`. The default template for a `cmd` configured shell is: ``` config.winssh.export_command_template = 'set %ENV_KEY%="%ENV_VALUE%"' ``` The default template for a `powershell` configured shell is: ``` config.winssh.export_command_template = '$env:%ENV_KEY%="%ENV_VALUE%"' ``` * [`config.winssh.sudo_command`](#config-winssh-sudo_command) (string) - The command to use when executing a command with `sudo`. This defaults to `%c` (it is assumed the vagrant user is an administrator and needs no escalation). The `%c` will be replaced by the command that is being executed. * [`config.winssh.upload_directory`](#config-winssh-upload_directory) (string) - The upload directory used on the guest to store scripts for execution. This is set to `C:\Windows\Temp` by default. vagrant Minimum Vagrant Version Minimum Vagrant Version ======================== A set of Vagrant version requirements can be specified in the Vagrantfile to enforce that people use a specific version of Vagrant with a Vagrantfile. This can help avoid compatibility issues that may otherwise arise from using a Vagrant version that is too old or too new for a given Vagrantfile.
Vagrant version requirements should be specified at the top of a Vagrantfile with the `Vagrant.require_version` helper: ``` Vagrant.require_version ">= 1.3.5" ``` In the case above, the Vagrantfile will only load if the version loading it is Vagrant 1.3.5 or greater. Multiple requirements can be specified as well: ``` Vagrant.require_version ">= 1.3.5", "< 1.4.0" ``` vagrant Tips & Tricks Tips & Tricks ============== The Vagrantfile is a very flexible configuration format. Since it is just Ruby, there is a lot you can do with it. However, in that same vein, since it is Ruby, there are a lot of ways you can shoot yourself in the foot. When using some of the tips and tricks on this page, please take care to use them correctly. Loop Over VM Definitions ------------------------- If you want to apply a slightly different configuration to many machines in a multi-machine environment, you can use a loop to do this. For example, if you wanted to create three machines: ``` (1..3).each do |i| config.vm.define "node-#{i}" do |node| node.vm.provision "shell", inline: "echo hello from node #{i}" end end ``` > **Warning:** The inner portion of multi-machine definitions and provider overrides are lazy-loaded. This can cause issues if you change the value of a variable used within the configs. For example, the loop below *does not work*: > > ``` # THIS DOES NOT WORK! for i in 1..3 do config.vm.define "node-#{i}" do |node| node.vm.provision "shell", inline: "echo hello from node #{i}" end end ``` The `for i in ...` construct in Ruby actually modifies the value of `i` for each iteration, rather than making a copy. Therefore, when you run this, every node will actually provision with the same text. This is an easy mistake to make, and Vagrant cannot really protect against it, so the best we can do is mention it here. Overwrite host locale in ssh session ------------------------------------- Usually, the host's locale environment variables are passed to the guest. This may cause failures if the guest software does not support the host's locale. One possible solution is to override the locale in the `Vagrantfile`: ``` ENV["LC_ALL"] = "en_US.UTF-8" Vagrant.configure("2") do |config| # ... end ``` The change is only visible within the `Vagrantfile`. vagrant WinRM Settings WinRM Settings =============== **Config namespace: `config.winrm`** The settings within `config.winrm` relate to configuring how Vagrant will access your Windows guest over WinRM. As with most Vagrant settings, the defaults are typically fine, but you can fine-tune whatever you would like. These settings are only used if you've set your communicator type to `:winrm`. Available Settings ------------------- * [`config.winrm.username`](#config-winrm-username) (string) - This sets the username that Vagrant will use to log in to the WinRM web service by default. Providers are free to override this if they detect a more appropriate user. By default this is "vagrant," since that is what most public boxes are made as. * [`config.winrm.password`](#config-winrm-password) (string) - This sets a password that Vagrant will use to authenticate the WinRM user. By default this is "vagrant," since that is what most public boxes are made as. * [`config.winrm.host`](#config-winrm-host) (string) - The hostname or IP to connect to the WinRM service. By default this is empty, because the provider usually figures this out for you. * [`config.winrm.port`](#config-winrm-port) (integer) - The WinRM port to connect to, by default 5985.
* [`config.winrm.guest_port`](#config-winrm-guest_port) (integer) - The port on the guest that WinRM is running on. This is used by some providers to detect forwarded ports for WinRM. For example, if this is set to 5985 (the default), and Vagrant detects a forwarded port to port 5985 on the guest from port 4567 on the host, Vagrant will attempt to use port 4567 to talk to the guest if there is no other option. * [`config.winrm.transport`](#config-winrm-transport) (symbol) - The transport used for WinRM communication. Valid settings include: `:negotiate`, `:ssl`, and `:plaintext`. The default is `:negotiate`. * [`config.winrm.basic_auth_only`](#config-winrm-basic_auth_only) (boolean) - Whether to use Basic Authentication. Defaults to `false`. If set to `true` you should also use the `:plaintext` transport setting and the Windows machine must be configured appropriately. **Note:** It is strongly recommended that you only use basic authentication for debugging purposes. Credentials will be transferred in plain text. * [`config.winrm.ssl_peer_verification`](#config-winrm-ssl_peer_verification) (boolean) - When set to `false`, SSL certificate validation is not performed. * [`config.winrm.timeout`](#config-winrm-timeout) (integer) - The maximum amount of time to wait for a response from the endpoint. This defaults to 60 seconds. Note that this will not "timeout" commands that exceed this amount of time to process, it just requires the endpoint to report the status of the command before the given amount of time passes. * [`config.winrm.retry_limit`](#config-winrm-retry_limit) (integer) - The maximum number of times to retry opening a shell after failure. This defaults to 3. * [`config.winrm.retry_delay`](#config-winrm-retry_delay) (integer) - The amount of time to wait between retries and defaults to 10 seconds. * [`config.winrm.codepage`](#config-winrm-codepage) (string) - The WINRS\_CODEPAGE, which is the client's console output code page. The default is 65001 (UTF-8). **Note:** Versions of Windows older than Windows 7/Server 2008 R2 may exhibit undesirable behavior using the default UTF-8 codepage. When using these older versions of Windows, it is best to use the native code page of the server's locale. For example, en-US servers will have a codepage of 437. The Windows `chcp` command can be used to determine the value of the native codepage. vagrant Vagrant Settings Vagrant Settings ================= **Config namespace: `config.vagrant`** The settings within `config.vagrant` modify the behavior of Vagrant itself. Available Settings ------------------- * [`config.vagrant.host`](#config-vagrant-host) (string, symbol) - This sets the type of host machine that is running Vagrant. By default this is `:detect`, which causes Vagrant to auto-detect the host. Vagrant needs to know this information in order to perform some host-specific things, such as preparing NFS folders if they're enabled. You should only manually set this if auto-detection fails. * [`config.vagrant.plugins`](#config-vagrant-plugins) - (string, array, hash) - Define a plugin, a list of plugins, or a definition of plugins to install for the local project. Vagrant will require these plugins be installed and available for the project. If the plugins are not available, it will attempt to automatically install them into the local project.
When requiring a single plugin, a string can be provided: ``` config.vagrant.plugins = "vagrant-plugin" ``` If multiple plugins are required, they can be provided as an array: ``` config.vagrant.plugins = ["vagrant-plugin", "vagrant-other-plugin"] ``` Plugins can also be defined as a Hash, which supports setting extra options for the plugins. When a Hash is used, the key is the name of the plugin, and the value is a Hash of options for the plugin. For example, to set an explicit version of a plugin to install: ``` config.vagrant.plugins = {"vagrant-scp" => {"version" => "1.0.0"}} ``` Supported options are: + [`entry_point`](#entry_point) - Path for Vagrant to load plugin + [`sources`](#sources) - Custom sources for downloading plugin + [`version`](#version) - Version constraint for plugin * [`config.vagrant.sensitive`](#config-vagrant-sensitive) - (string, array) - Value or list of values that should not be displayed in Vagrant's output. Value(s) will be removed from Vagrant's normal UI output as well as logger output. ``` config.vagrant.sensitive = ["MySecretPassword", ENV["MY_TOKEN"]] ``` vagrant Configuration Version Configuration Version ====================== Configuration versions are the mechanism by which Vagrant 1.1+ is able to remain [backwards compatible](../installation/backwards-compatibility) with Vagrant 1.0.x Vagrantfiles, while introducing dramatically new features and configuration options. If you run `vagrant init` today, the Vagrantfile will be in roughly the following format: ``` Vagrant.configure("2") do |config| # ... end ``` The `"2"` in the first line above represents the version of the configuration object `config` that will be used for configuration for that block (the section between the `do` and the `end`). This object can be very different from version to version. Currently, there are only two supported versions: "1" and "2". Version 1 represents the configuration from Vagrant 1.0.x. "2" represents the configuration for 1.1+ leading up to 2.0.x. When loading Vagrantfiles, Vagrant uses the proper configuration object for each version, and properly merges them, just like any other configuration. The important thing to understand as a general user of Vagrant is that *within a single configuration section*, only a single version can be used. You cannot use the new `config.vm.provider` configurations in a version 1 configuration section. Likewise, `config.vm.forward_port` will not work in a version 2 configuration section (it was renamed). If you want, you can mix and match multiple configuration versions in the same Vagrantfile. This is useful if you found some useful configuration snippet or something that you want to use. Example: ``` Vagrant.configure("1") do |config| # v1 configs... end Vagrant.configure("2") do |config| # v2 configs... end ``` > **What is `Vagrant::Config.run`?** You may see this in Vagrantfiles. This was actually how Vagrant 1.0.x did configuration. In Vagrant 1.1+, this is synonymous with `Vagrant.configure("1")`. > > vagrant Machine Settings Machine Settings ================= **Config namespace: `config.vm`** The settings within `config.vm` modify the configuration of the machine that Vagrant manages. Available Settings ------------------- * [`config.vm.base_mac`](#config-vm-base_mac) (string) - The MAC address to be assigned to the default NAT interface on the guest. 
*Support for this option is provider dependent.* * [`config.vm.base_address`](#config-vm-base_address) (string) - The IP address to be assigned to the default NAT interface on the guest. *Support for this option is provider dependent.* * [`config.vm.boot_timeout`](#config-vm-boot_timeout) (integer) - The time in seconds that Vagrant will wait for the machine to boot and be accessible. By default this is 300 seconds. * [`config.vm.box`](#config-vm-box) (string) - This configures what [box](../boxes) the machine will be brought up against. The value here should be the name of an installed box or a shorthand name of a box in [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud). * [`config.vm.box_check_update`](#config-vm-box_check_update) (boolean) - If true, Vagrant will check for updates to the configured box on every `vagrant up`. If an update is found, Vagrant will tell the user. By default this is true. Updates will only be checked for boxes that properly support updates (boxes from [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud) or some other versioned box). * [`config.vm.box_download_checksum`](#config-vm-box_download_checksum) (string) - The checksum of the box specified by `config.vm.box_url`. If not specified, no checksum comparison will be done. If specified, Vagrant will compare the checksum of the downloaded box to this value and error if they do not match. Checksum checking is only done when Vagrant must download the box. If this is specified, then `config.vm.box_download_checksum_type` must also be specified. * [`config.vm.box_download_checksum_type`](#config-vm-box_download_checksum_type) (string) - The type of checksum specified by `config.vm.box_download_checksum` (if any). Supported values are currently "md5", "sha1", and "sha256". * [`config.vm.box_download_client_cert`](#config-vm-box_download_client_cert) (string) - Path to a client certificate to use when downloading the box, if it is necessary. By default, no client certificate is used to download the box. * [`config.vm.box_download_ca_cert`](#config-vm-box_download_ca_cert) (string) - Path to a CA cert bundle to use when downloading a box directly. By default, Vagrant will use the Mozilla CA cert bundle. * [`config.vm.box_download_ca_path`](#config-vm-box_download_ca_path) (string) - Path to a directory containing CA certificates for downloading a box directly. By default, Vagrant will use the Mozilla CA cert bundle. * [`config.vm.box_download_insecure`](#config-vm-box_download_insecure) (boolean) - If true, then SSL certificates from the server will not be verified. By default, if the URL is an HTTPS URL, then SSL certs will be verified. * [`config.vm.box_download_location_trusted`](#config-vm-box_download_location_trusted) (boolean) - If true, then all HTTP redirects will be treated as trusted. That means credentials used for initial URL will be used for all subsequent redirects. By default, redirect locations are untrusted so credentials (if specified) used only for initial HTTP request. * [`config.vm.box_url`](#config-vm-box_url) (string, array of strings) - The URL that the configured box can be found at. If `config.vm.box` is a shorthand to a box in [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud) then this value does not need to be specified. Otherwise, it should point to the proper place where the box can be found if it is not installed. This can also be an array of multiple URLs. The URLs will be tried in order. 
Note that any client certificates, insecure download settings, and so on will apply to all URLs in this list. The URLs can also be local files by using the `file://` scheme. For example: "file:///tmp/test.box". * [`config.vm.box_version`](#config-vm-box_version) (string) - The version of the box to use. This defaults to ">= 0" (the latest version available). This can contain an arbitrary list of constraints, separated by commas, such as: `>= 1.0, < 1.5`. When constraints are given, Vagrant will use the latest available box satisfying these constraints. * [`config.vm.communicator`](#config-vm-communicator) (string) - The communicator type to use to connect to the guest box. By default this is `"ssh"`, but should be changed to `"winrm"` for Windows guests. * [`config.vm.graceful_halt_timeout`](#config-vm-graceful_halt_timeout) (integer) - The time in seconds that Vagrant will wait for the machine to gracefully halt when `vagrant halt` is called. Defaults to 60 seconds. * [`config.vm.guest`](#config-vm-guest) (string, symbol) - The guest OS that will be running within this machine. This defaults to `:linux`, and Vagrant will auto-detect the proper distro. However, this should be changed to `:windows` for Windows guests. Vagrant needs to know this information to perform some guest OS-specific things such as mounting folders and configuring networks. * [`config.vm.hostname`](#config-vm-hostname) (string) - The hostname the machine should have. Defaults to nil. If nil, Vagrant will not manage the hostname. If set to a string, the hostname will be set on boot. If set, Vagrant will update `/etc/hosts` on the guest with the configured hostname. * [`config.vm.ignore_box_vagrantfile`](#config-vm-ignore_box_vagrantfile) (boolean) - If true, Vagrant will not load the settings found inside a box's Vagrantfile, if present. Defaults to `false`. * [`config.vm.network`](#config-vm-network) - Configures [networks](../networking/index) on the machine. Please see the networking page for more information. * [`config.vm.post_up_message`](#config-vm-post_up_message) (string) - A message to show after `vagrant up`. This will be shown to the user and is useful for containing instructions such as how to access various components of the development environment. * [`config.vm.provider`](#config-vm-provider) - Configures [provider-specific configuration](../providers/configuration), which is used to modify settings which are specific to a certain [provider](../providers/index). If the provider you are configuring does not exist or is not set up on the system of the person who runs `vagrant up`, Vagrant will ignore this configuration block. This allows a Vagrantfile that is configured for many providers to be shared among a group of people who may not have all the same providers installed. * [`config.vm.provision`](#config-vm-provision) - Configures [provisioners](../provisioning/index) on the machine, so that software can be automatically installed and configured when the machine is created. Please see the page on provisioners for more information on how this setting works. * [`config.vm.synced_folder`](#config-vm-synced_folder) - Configures [synced folders](../synced-folders/index) on the machine, so that folders on your host machine can be synced to and from the guest machine. Please see the page on synced folders for more information on how this setting works. * [`config.vm.usable_port_range`](#config-vm-usable_port_range) (range) - A range of ports Vagrant can use for handling port collisions and such.
Defaults to `2200..2250`.
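To tie a few of these `config.vm` settings together, here is a minimal `Vagrantfile` sketch. The box name, hostname, and message below are placeholders rather than recommendations:

```
Vagrant.configure("2") do |config|
  config.vm.box              = "hashicorp/precise64"  # placeholder box name
  config.vm.box_check_update = true                   # check for box updates on "vagrant up"
  config.vm.hostname         = "dev-machine"
  config.vm.boot_timeout     = 300                    # seconds to wait for boot
  config.vm.post_up_message  = "The machine is up. Happy hacking!"
end
```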
vagrant Multi-Machine Multi-Machine ============== Vagrant is able to define and control multiple guest machines per Vagrantfile. This is known as a "multi-machine" environment. These machines are generally able to work together or are somehow associated with each other. Here are some use-cases people are using multi-machine environments for today: * Accurately modeling a multi-server production topology, such as separating a web and database server. * Modeling a distributed system and how they interact with each other. * Testing an interface, such as an API to a service component. * Disaster-case testing: machines dying, network partitions, slow networks, inconsistent world views, etc. Historically, running complex environments such as these was done by flattening them onto a single machine. The problem with that is that it is an inaccurate model of the production setup, which can behave far differently. Using the multi-machine feature of Vagrant, these environments can be modeled in the context of a single Vagrant environment without losing any of the benefits of Vagrant. Defining Multiple Machines --------------------------- Multiple machines are defined within the same project [Vagrantfile](../vagrantfile/index) using the `config.vm.define` method call. This configuration directive is a little funny, because it creates a Vagrant configuration within a configuration. An example shows this best: ``` Vagrant.configure("2") do |config| config.vm.provision "shell", inline: "echo Hello" config.vm.define "web" do |web| web.vm.box = "apache" end config.vm.define "db" do |db| db.vm.box = "mysql" end end ``` As you can see, `config.vm.define` takes a block with another variable. This variable, such as `web` above, is the *exact* same as the `config` variable, except any configuration of the inner variable applies only to the machine being defined. Therefore, any configuration on `web` will only affect the `web` machine. And importantly, you can continue to use the `config` object as well. The configuration object is loaded and merged before the machine-specific configuration, just like other Vagrantfiles within the [Vagrantfile load order](../vagrantfile/index#load-order). If you are familiar with programming, this is similar to how languages have different variable scopes. When using these scopes, order of execution for things such as provisioners becomes important. Vagrant enforces ordering outside-in, in the order listed in the Vagrantfile. For example, with the Vagrantfile below: ``` Vagrant.configure("2") do |config| config.vm.provision :shell, inline: "echo A" config.vm.define :testing do |test| test.vm.provision :shell, inline: "echo B" end config.vm.provision :shell, inline: "echo C" end ``` The provisioners in this case will output "A", then "C", then "B". Notice that "B" is last. That is because the ordering is outside-in, in the order of the file. If you want to apply a slightly different configuration to multiple machines, see [this tip](../vagrantfile/tips#loop-over-vm-definitions). Controlling Multiple Machines ------------------------------ The moment more than one machine is defined within a Vagrantfile, the usage of the various `vagrant` commands changes slightly. The change should be mostly intuitive. Commands that only make sense to target a single machine, such as `vagrant ssh`, now *require* the name of the machine to control. Using the example above, you would say `vagrant ssh web` or `vagrant ssh db`. 
Other commands, such as `vagrant up`, operate on *every* machine by default. So if you ran `vagrant up`, Vagrant would bring up both the web and DB machine. You could also optionally be specific and say `vagrant up web` or `vagrant up db`. Additionally, you can specify a regular expression for matching only certain machines. This is useful in some cases where you specify many similar machines, for example if you are testing a distributed service you may have a `leader` machine as well as a `follower0`, `follower1`, `follower2`, etc. If you want to bring up all the followers but not the leader, you can just do `vagrant up /follower[0-9]/`. If Vagrant sees a machine name within forward slashes, it assumes you are using a regular expression. Communication Between Machines ------------------------------- In order to facilitate communication within machines in a multi-machine setup, the various [networking](../networking/index) options should be used. In particular, the [private network](../networking/private_network) can be used to make a private network between multiple machines and the host. Specifying a Primary Machine ----------------------------- You can also specify a *primary machine*. The primary machine will be the default machine used when a specific machine in a multi-machine environment is not specified. To specify a default machine, just mark it primary when defining it. Only one primary machine may be specified. ``` config.vm.define "web", primary: true do |web| # ... end ``` Autostart Machines ------------------- By default in a multi-machine environment, `vagrant up` will start all of the defined machines. The `autostart` setting allows you to tell Vagrant to *not* start specific machines. Example: ``` config.vm.define "web" config.vm.define "db" config.vm.define "db_follower", autostart: false ``` When running `vagrant up` with the settings above, Vagrant will automatically start the "web" and "db" machines, but will not start the "db\_follower" machine. You can manually force the "db\_follower" machine to start by running `vagrant up db_follower`. vagrant Vagrant Share Vagrant Share ============== Vagrant Share allows you to share your Vagrant environment with anyone in the world, enabling collaboration directly in your Vagrant environment in almost any network environment with just a single command: `vagrant share`. Vagrant share has three primary modes or features. These features are not mutually exclusive, meaning that any combination of them can be active at any given time: * **HTTP sharing** will create a URL that you can give to anyone. This URL will route directly into your Vagrant environment. The person using this URL does not need Vagrant installed, so it can be shared with anyone. This is useful for testing webhooks or showing your work to clients, teammates, managers, etc. * **SSH sharing** will allow instant SSH access to your Vagrant environment by anyone by running `vagrant connect --ssh` on the remote side. This is useful for pair programming, debugging ops problems, etc. * **General sharing** allows anyone to access any exposed port of your Vagrant environment by running `vagrant connect` on the remote side. This is useful if the remote side wants to access your Vagrant environment as if it were a computer on the LAN. The details of each are covered in their specific section in the sidebar to the left. We also have a section where we go into detail about the security implications of this feature. 
Installation ------------- Vagrant Share is a Vagrant plugin that must be installed. It is not included with Vagrant system packages. To install the Vagrant Share plugin, run the following command: ``` $ vagrant plugin install vagrant-share ``` Vagrant Share requires [ngrok](https://ngrok.com) to be used. vagrant Custom Provider Custom Provider ================ > **Warning: Advanced Topic!** This topic is related to developing Vagrant plugins. If you are not interested in this or you are just starting with Vagrant, it is safe to skip this page. > > If you are developing a [custom Vagrant provider](../plugins/providers), you will need to do a tiny bit more work in order for it to work well with Vagrant Share. For now, this is only one step: * [`public_address`](#public_address) provider capability - You must implement this capability to return a string that is an address that can be used to access the guest from Vagrant. This does not need to be a globally routable address, it only needs to be accessible from the machine running Vagrant. If you cannot detect an address, return `nil`. vagrant HTTP Sharing HTTP Sharing ============= Vagrant Share can create a publicly accessible URL endpoint to access an HTTP server running in your Vagrant environment. This is known as "HTTP sharing," and is enabled by default when `vagrant share` is used. Because this mode of sharing creates a publicly accessible URL, the accessing party does not need to have Vagrant installed in order to view your environment. This has a number of useful use cases: you can test webhooks by exposing your Vagrant environment to the internet, you can show your work to clients, teammates, or managers, etc. Usage ------ To use HTTP sharing, simply run `vagrant share`: ``` $ vagrant share ==> default: Detecting network information for machine... default: Local machine address: 192.168.84.130 default: Local HTTP port: 9999 default: Local HTTPS port: disabled ==> default: Creating Vagrant Share session... ==> default: HTTP URL: http://b1fb1f3f.ngrok.io ``` Vagrant detects where your HTTP server is running in your Vagrant environment and outputs the endpoint that can be used to access this share. Just give this URL to anyone you want to share it with, and they will be able to access your Vagrant environment! If Vagrant has trouble detecting the port of your servers in your environment, use the `--http` and/or `--https` flags to be more explicit. The share will be accessible for the duration that `vagrant share` is running. Press `Ctrl-C` to quit the sharing session. > **Warning:** This URL is accessible by *anyone* who knows it, so be careful if you are sharing sensitive information. > > Disabling ---------- If you want to disable the creation of the publicly accessible endpoint, run `vagrant share` with the `--disable-http` flag. This will share your environment using one of the other methods available, and will not create the URL endpoint. Missing Assets --------------- Shared web applications must use **relative paths** for loading any local assets such as images, stylesheets, javascript. The web application under development will be accessed remotely. This means that if you have any hardcoded asset (images, stylesheets, etc.) URLs such as `<img src="http://127.0.0.1/header.png">`, then they will not load for people accessing your share. Most web frameworks or toolkits have settings or helpers to generate relative paths. 
For example, if you are a WordPress developer, the [Root Relative URLs](http://wordpress.org/plugins/root-relative-urls/) plugin will automatically do this for you. Relative URLs to assets is generally a best practice in general, so you should do this anyways! HTTPS (SSL) ------------ Vagrant Share can also expose an SSL port that can be accessed over SSL. Creating an HTTPS share requires a non-free ngrok account. `vagrant share` by default looks for any SSL traffic on port 443 in your development environment. If it cannot find any, then SSL is disabled by default. The HTTPS share can be explicitly disabled using the `--disable-https` flag. vagrant Security Security ========= Sharing your Vagrant environment understandably raises a number of security concerns. The primary security mechanism for Vagrant Share is security through obscurity along with an encryption key for SSH. Additionally, there are several configuration options made available to help control access and manage security: * [`--disable-http`](#disable-http) will not create a publicly accessible HTTP URL. When this is set, the only way to access the share is with `vagrant connect`. In addition to these options, there are other features we've built to help: * Vagrant share uses end-to-end TLS for non-HTTP connections. So even unencrypted TCP streams are encrypted through the various proxies and only unencrypted during the final local communication between the local proxy and the Vagrant environment. * SSH keys are encrypted by default, using a password that is not transmitted to our servers or across the network at all. * SSH is not shared by default, it must explicitly be shared with the `--ssh` flag. Most importantly, you must understand that by running `vagrant share`, you are making your Vagrant environment accessible by anyone who knows the share name. When share is not running, it is not accessible. vagrant Vagrant Connect Vagrant Connect ================ Vagrant can share any or *every* port to your Vagrant environment, not just SSH and HTTP. The `vagrant connect` command gives the connecting person a static IP they can use to communicate to the shared Vagrant environment. Any TCP traffic sent to this IP is sent to the shared Vagrant environment. Usage ------ Just call `vagrant share --full`. This will automatically share as many ports as possible for remote connections. Please see [the Vagrant share security page](security) for more information. Note the share name at the end of calling `vagrant share --full`, and give this to the person who wants to connect to your machine. They simply have to call `vagrant connect NAME`. This will give them a static IP they can use to access your Vagrant environment. How does it work? ------------------ `vagrant connect` works by doing what Vagrant does best: managing virtual machines. `vagrant connect` creates a tiny virtual machine that takes up only around 20 MB in RAM, using VirtualBox or VMware (more provider support is coming soon). Any traffic sent to this tiny virtual machine is then proxied through to the shared Vagrant environment as if it were directed at it. Beware: Vagrant Insecure Key ----------------------------- If the Vagrant environment or box you are using is protected with the Vagrant insecure keypair (most public boxes are), then SSH will be easily available to anyone who connects. While hopefully you are sharing with someone you trust, in certain environments you might be sharing with a class, or a conference, and you do not want them to be able to SSH in. 
In this case, we recommend changing or removing the insecure key from the Vagrant machine. Finally, we want to note that we are working on making it so that when Vagrant share is used, the Vagrant private key is actively rejected unless explicitly allowed. This feature is not yet done, however. vagrant SSH Sharing SSH Sharing ============ Vagrant share makes it trivially easy to allow remote SSH access to your Vagrant environment by supplying the `--ssh` flag to `vagrant share`. Easy SSH sharing is incredibly useful if you want to give access to a colleague for troubleshooting ops issues. Additionally, it enables pair programming with a Vagrant environment, if you want! SSH sharing is disabled by default as a security measure. To enable SSH sharing, simply supply the `--ssh` flag when calling `vagrant share`. Usage ------ Just run `vagrant share --ssh`! When SSH sharing is enabled, Vagrant generates a brand new keypair for SSH access. The public key portion is automatically inserted into the Vagrant machine, and the private key portion is provided to the user connecting to the Vagrant share. This private key is encrypted using a password that you will be prompted for. This password is *never* transmitted across the network by Vagrant, and is an extra layer of security preventing anyone who may know your share name from easily accessing your machine. After running `vagrant share --ssh`, it will output the name of your share: ``` $ vagrant share --ssh ==> default: Detecting network information for machine... default: Local machine address: 192.168.84.130 ==> default: Generating new SSH key... default: Please enter a password to encrypt the key: default: Repeat the password to confirm: default: Inserting generated SSH key into machine... default: Local HTTP port: disabled default: Local HTTPS port: disabled default: SSH Port: 2200 ==> default: Creating Vagrant Share session... share: Cloning VMware VM: 'hashicorp/vagrant-share'. This can take some time... share: Verifying vmnet devices are healthy... share: Preparing network adapters... share: Starting the VMware VM... share: Waiting for machine to boot. This may take a few minutes... share: SSH address: 192.168.84.134:22 share: SSH username: tc share: SSH auth method: password share: share: Inserting generated public key within guest... share: Removing insecure key from the guest if it's present... share: Key inserted! Disconnecting and reconnecting using new SSH key... share: Machine booted and ready! share: Forwarding ports... share: -- 31338 => 65534 share: -- 22 => 2202 share: SSH address: 192.168.84.134:22 share: SSH username: tc share: SSH auth method: password share: Configuring network adapters within the VM... ==> share: ==> share: Your Vagrant Share is running! Name: bazaar_wolf:sultan_oasis ==> share: ==> share: You're sharing with SSH access. This means that another can SSH to ==> share: your Vagrant machine by running: ==> share: ==> share: vagrant connect --ssh bazaar_wolf:sultan_oasis ==> share: ``` Anyone can then SSH directly to your Vagrant environment by running `vagrant connect --ssh NAME` where NAME is the name of the share outputted previously. ``` $ vagrant connect --ssh bazaar_wolf:sultan_oasis Loading share 'bazaar_wolf:sultan_oasis'... The SSH key to connect to this share is encrypted. You will require the password entered when creating the share to decrypt it. Verify you have access to this password before continuing. Press enter to continue, or Ctrl-C to exit now. Password for the private key: Executing SSH... 
Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.8.0-29-generic x86_64) * Documentation: https://help.ubuntu.com/ Last login: Fri Mar 7 17:44:50 2014 from 192.168.163.1 vagrant@vagrant:~$ ``` If the private key is encrypted (the default behavior), then the connecting person will be prompted for the password to decrypt the private key. vagrant Box File Format Box File Format ================ In the past, boxes were just [tar files](https://en.wikipedia.org/wiki/Tar_(computing)) of VirtualBox exports. With Vagrant supporting multiple [providers](../providers/index) and [versioning](versioning) now, box files are slightly more complicated. Box files made for Vagrant 1.0.x (the VirtualBox export `tar` files) continue to work with Vagrant today. When Vagrant encounters one of these old boxes, it automatically updates it internally to the new format. Today, there are three different components: * Box File - This is a compressed (`tar`, `tar.gz`, `zip`) file that is specific to a single provider and can contain anything. Vagrant core does not ever use the contents of this file. Instead, they are passed to the provider. Therefore, a VirtualBox box file has different contents from a VMware box file and so on. * Box Catalog Metadata - This is a JSON document (typically exchanged during interactions with [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud)) that specifies the name of the box, a description, available versions, available providers, and URLs to the actual box files (next component) for each provider and version. If this catalog metadata does not exist, a box file can still be added directly, but it will not support versioning and updating. * Box Information - This is a JSON document that can provide additional information about the box that displays when a user runs `vagrant box list -i`. More information is provided [here](info). The first two components are covered in more detail below. Box File --------- The actual box file is the required portion for Vagrant. It is recommended you always use a metadata file alongside a box file, but direct box files are supported for legacy reasons in Vagrant. Box files are compressed using `tar`, `tar.gz`, or `zip`. The contents of the archive can be anything, and are specific to each [provider](../providers/index). Vagrant core itself only unpacks the boxes for use later. Within the archive, Vagrant does expect a single file: `metadata.json`. This is a JSON file that is completely unrelated to the above box catalog metadata component; there is only one `metadata.json` per box file (inside the box file), whereas one catalog metadata JSON document can describe multiple versions of the same box, potentially spanning multiple providers. `metadata.json` must contain at least the "provider" key with the provider the box is for. Vagrant uses this to verify the provider of the box. For example, if your box was for VirtualBox, the `metadata.json` would look like this: ``` { "provider": "virtualbox" } ``` If there is no `metadata.json` file or the file does not contain valid JSON with at least a "provider" key, then Vagrant will error when adding the box, because it cannot verify the provider. Other keys/values may be added to the metadata without issue. The value of the metadata file is passed opaquely into Vagrant and plugins can make use of it. At this point, Vagrant core does not use any other keys in this file.
Box Metadata ------------- The metadata is an optional component for a box (but highly recommended) that enables [versioning](versioning), updating, multiple providers from a single file, and more. > **You do not need to manually make the metadata.** If you have an account with [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud), you can create boxes there, and HashiCorp's Vagrant Cloud automatically creates the metadata for you. The format is still documented here. > > It is a JSON document, structured in the following way: ``` { "name": "hashicorp/precise64", "description": "This box contains Ubuntu 12.04 LTS 64-bit.", "versions": [ { "version": "0.1.0", "providers": [ { "name": "virtualbox", "url": "http://somewhere.com/precise64_010_virtualbox.box", "checksum_type": "sha1", "checksum": "foo" } ] } ] } ``` As you can see, the JSON document can describe multiple versions of a box, multiple providers, and can add/remove providers in different versions. This JSON file can be passed directly to `vagrant box add` from the local filesystem using a file path or via a URL, and Vagrant will install the proper version of the box. In this case, the value for the `url` key in the JSON can also be a file path. If multiple providers are available, Vagrant will ask what provider you want to use.
vagrant Creating a Base Box Creating a Base Box ==================== There are a special category of boxes known as "base boxes." These boxes contain the bare minimum required for Vagrant to function, are generally not made by repackaging an existing Vagrant environment (hence the "base" in the "base box"). For example, the Ubuntu boxes provided by the Vagrant project (such as "precise64") are base boxes. They were created from a minimal Ubuntu install from an ISO, rather than repackaging an existing environment. Base boxes are extremely useful for having a clean slate starting point from which to build future development environments. The Vagrant project hopes in the future to be able to provide base boxes for many more operating systems. Until then, this page documents how you can create your own base box. > **Advanced topic!** Creating a base box can be a time consuming and tedious process, and is not recommended for new Vagrant users. If you are just getting started with Vagrant, we recommend trying to find existing base boxes to use first. > > What's in a Base Box? ---------------------- A base box typically consists of only a bare minimum set of software for Vagrant to function. As an example, a Linux box may contain only the following: * Package manager * SSH * SSH user so Vagrant can connect * Perhaps Chef, Puppet, etc. but not strictly required. In addition to this, each [provider](../providers/index) may require additional software. For example, if you are making a base box for VirtualBox, you will want to include the VirtualBox guest additions so that shared folders work properly. But if you are making an AWS base box, this is not required. Creating a Base Box -------------------- Creating a base box is actually provider-specific. This means that depending on if you are using VirtualBox, VMware, AWS, etc. the process for creating a base box is different. Because of this, this one document cannot be a full guide to creating a base box. This page will document some general guidelines for creating base boxes, however, and will link to provider-specific guides for creating base boxes. Provider-specific guides for creating base boxes are linked below: * [Docker Base Boxes](../docker/boxes) * [Hyper-V Base Boxes](../hyperv/boxes) * [VMware Base Boxes](../vmware/boxes) * [VirtualBox Base Boxes](../virtualbox/boxes) ### Packer and Vagrant Cloud We strongly recommend using [Packer](https://www.packer.io) to create reproducible builds for your base boxes, as well as automating the builds. Read more about [automating Vagrant box creation with Packer](https://www.packer.io/guides/packer-on-cicd/build-image-in-cicd.html) in the Packer documentation. ### Disk Space When creating a base box, make sure the user will have enough disk space to do interesting things, without being annoying. For example, in VirtualBox, you should create a dynamically resizing drive with a large maximum size. This causes the actual footprint of the drive to be small initially, but to dynamically grow towards the max size as disk space is needed, providing the most flexibility for the end user. If you are creating an AWS base box, do not force the AMI to allocate terabytes of EBS storage, for example, since the user can do that on their own. But you should default to mounting ephemeral drives, because they're free and provide a lot of disk space. ### Memory Like disk space, finding the right balance of the default amount of memory is important. 
For most providers, the user can modify the memory with the Vagrantfile, so do not use too much by default. It would be a poor user experience (and mildly shocking) if a `vagrant up` from a base box instantly required many gigabytes of RAM. Instead, choose a value such as 512MB, which is usually enough to play around and do interesting things with a Vagrant machine, but can easily be increased when needed. ### Peripherals (Audio, USB, etc.) Disable any non-necessary hardware in a base box such as audio and USB controllers. These are generally unnecessary for Vagrant usage and, again, can be easily added via the Vagrantfile in most cases. Default User Settings ---------------------- Just about every aspect of Vagrant can be modified. However, Vagrant does expect some defaults which will cause your base box to "just work" out of the box. You should create these as defaults if you intend to publicly distribute your box. If you are creating a base box for private use, you should try *not* to follow these, as they open up your base box to security risks (known users, passwords, private keys, etc.). ### "vagrant" User By default, Vagrant expects a "vagrant" user to SSH into the machine as. This user should be setup with the [insecure keypair](https://github.com/hashicorp/vagrant/tree/master/keys) that Vagrant uses as a default to attempt to SSH. Also, even though Vagrant uses key-based authentication by default, it is a general convention to set the password for the "vagrant" user to "vagrant". This lets people login as that user manually if they need to. To configure SSH access with the insecure keypair, place the public key into the `~/.ssh/authorized_keys` file for the "vagrant" user. Note that OpenSSH is very picky about file permissions. Therefore, make sure that `~/.ssh` has `0700` permissions and the authorized keys file has `0600` permissions. When Vagrant boots a box and detects the insecure keypair, it will automatically replace it with a randomly generated keypair for additional security while the box is running. ### Root Password: "vagrant" Vagrant does not actually use or expect any root password. However, having a generally well known root password makes it easier for the general public to modify the machine if needed. Publicly available base boxes usually use a root password of "vagrant" to keep things easy. ### Password-less Sudo This is **important!**. Many aspects of Vagrant expect the default SSH user to have passwordless sudo configured. This lets Vagrant configure networks, mount synced folders, install software, and more. To begin, some minimal installations of operating systems do not even include `sudo` by default. Verify that you install `sudo` in some way. After installing sudo, configure it (usually using `visudo`) to allow passwordless sudo for the "vagrant" user. This can be done with the following line at the end of the configuration file: ``` vagrant ALL=(ALL) NOPASSWD: ALL ``` Additionally, Vagrant does not use a pty or tty by default when connected via SSH. You will need to make sure there is no line that has `requiretty` in it. Remove that if it exists. This allows sudo to work properly without a tty. Note that you *can* configure Vagrant to request a pty, which lets you keep this configuration. But Vagrant by default does not do this. ### SSH Tweaks In order to keep SSH speedy even when your machine or the Vagrant machine is not connected to the internet, set the `UseDNS` configuration to `no` in the SSH server configuration. 
This avoids a reverse DNS lookup on the connecting SSH client which can take many seconds. Windows Boxes -------------- Supported Windows guest operating systems: - Windows 7 - Windows 8 - Windows Server 2008 - Windows Server 2008 R2 - Windows Server 2012 - Windows Server 2012 R2 Windows Server 2003 and Windows XP are *not* supported, but if you are a die hard XP fan [this](https://stackoverflow.com/a/18593425/18475) may help you. ### Base Windows Configuration * Turn off UAC * Disable complex passwords * Disable "Shutdown Tracker" * Disable "Server Manager" starting at login (for non-Core) In addition to disabling UAC in the control panel, you also must disable UAC in the registry. This may vary from Windows version to Windows version, but Windows 8/8.1 use the command below. This will allow some things like automated Puppet installs to work within Vagrant Windows base boxes. ``` reg add HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA /d 0 /t REG_DWORD /f /reg:64 ``` ### Base WinRM Configuration To enable and configure WinRM you will need to set the WinRM service to auto-start and allow unencrypted basic auth (obviously this is not secure). Run the following commands from a regular Windows command prompt: ``` winrm quickconfig -q winrm set winrm/config/winrs @{MaxMemoryPerShellMB="512"} winrm set winrm/config @{MaxTimeoutms="1800000"} winrm set winrm/config/service @{AllowUnencrypted="true"} winrm set winrm/config/service/auth @{Basic="true"} sc config WinRM start= auto ``` ### Additional WinRM 1.1 Configuration These additional configuration steps are specific to Windows Server 2008 (WinRM 1.1). For Windows Server 2008 R2, Windows 7 and later versions of Windows you can ignore this section. 1. Ensure the Windows PowerShell feature is installed 2. Change the WinRM port to 5985 or upgrade to WinRM 2.0 The following commands will change the WinRM 1.1 port to what's expected by Vagrant: ``` netsh firewall add portopening TCP 5985 "Port 5985" winrm set winrm/config/listener?Address=*+Transport=HTTP @{Port="5985"} ``` Other Software --------------- At this point, you have all the common software you absolutely *need* for your base box to work with Vagrant. However, there is some additional software you can install if you wish. While we plan on it in the future, Vagrant still does not install Chef or Puppet automatically when using those provisioners. Users can use a shell provisioner to do this, but if you want Chef/Puppet to just work out of the box, you will have to install them in the base box. Installing this is outside the scope of this page, but should be fairly straightforward. In addition to this, feel free to install and configure any other software you want available by default for this base box. Packaging the Box ------------------ Packaging the box into a `box` file is provider-specific. Please refer to the provider-specific documentation for creating a base box. Some provider-specific guides are linked to towards the top of this page. Distributing the Box --------------------- You can distribute the box file however you would like. However, if you want to support versioning, putting multiple providers at a single URL, pushing updates, analytics, and more, we recommend you add the box to [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud). You can upload both public and private boxes to this service. 
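If you choose to host box files yourself rather than use Vagrant Cloud, you can still support versioning by pointing users at a small JSON catalog instead of a raw `.box` URL. The snippet below is only an illustrative sketch; the box name, URL, and checksum value are placeholders, and the authoritative list of keys is described in the box format documentation referenced elsewhere on these pages.

```
{
  "name": "example/mybox",
  "versions": [
    {
      "version": "1.0.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "https://example.com/boxes/mybox-1.0.0.box",
          "checksum_type": "sha256",
          "checksum": "<sha256 of the box file>"
        }
      ]
    }
  ]
}
```

Users can then run `vagrant box add` against the catalog URL, and `vagrant box update` should be able to pick up new versions published to the same catalog.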
Testing the Box ---------------- To test the box, pretend you are a new user of Vagrant and give it a shot: ``` $ vagrant box add --name my-box /path/to/the/new.box ... $ vagrant init my-box ... $ vagrant up ... ``` If you made a box for some other provider, be sure to specify the `--provider` option to `vagrant up`. If the up succeeded, then your box worked! vagrant Box Versioning Box Versioning =============== Since Vagrant 1.5, boxes support versioning. This allows the people who make boxes to push updates to the box, and the people who use the box have a simple workflow for checking for updates, updating their boxes, and seeing what has changed. If you are just getting started with Vagrant, box versioning is not too important, and we recommend learning about some other topics first. But if you are using Vagrant on a team or plan on creating your own boxes, versioning is very important. Luckily, having versioning built right in to Vagrant makes it easy to use and fit nicely into the Vagrant workflow. This page will cover how to use versioned boxes. It does *not* cover how to update your own custom boxes with versions. That is covered in [creating a base box](base). Viewing Versions and Updating ------------------------------ `vagrant box list` only shows *installed* versions of boxes. If you want to see all available versions of a box, you will have to find the box on [HashiCorp's Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud). An easy way to find a box is to use the url `https://vagrantcloud.com/$USER/$BOX`. For example, for the `hashicorp/precise64` box, you can find information about it at `https://vagrantcloud.com/hashicorp/precise64`. You can check if the box you are using is outdated with `vagrant box outdated`. This can check if the box in your current Vagrant environment is outdated as well as any other box installed on the system. Finally, you can update boxes with `vagrant box update`. This will download and install the new box. This *will not* magically update running Vagrant environments. If a Vagrant environment is already running, you will have to destroy and recreate it to acquire the new updates in the box. The update command just downloads these updates locally. Version Constraints -------------------- You can constrain a Vagrant environment to a specific version or versions of a box using the [Vagrantfile](../vagrantfile/index) by specifying the `config.vm.box_version` option. If this option is not specified, the latest version is always used. This is equivalent to specifying a constraint of ">= 0". The box version configuration can be a specific version or a constraint of versions. Constraints can be any combination of the following: `= X`, `> X`, `< X`, `>= X`, `<= X`, `~> X`. You can combine multiple constraints by separating them with commas. All the constraints should be self explanatory except perhaps for `~>`, known as the "pessimistic constraint". Examples explain it best: `~> 1.0` is equivalent to `>= 1.0, < 2.0`. And `~> 1.1.5` is equivalent to `>= 1.1.5, < 1.2.0`. You can choose to handle versions however you see fit. However, many boxes in the public catalog follow [semantic versioning](http://semver.org/). Basically, only the first number (the "major version") breaks backwards compatibility. In terms of Vagrant boxes, this means that any software that runs in version "1.1.5" of a box should work in "1.2" and "1.4.5" and so on, but "2.0" might introduce big changes that break your software. 
By following this convention, the best constraint is `~> 1.0` because you know it is safe no matter what version is in that range. Please note that, while the semantic versioning specification allows for more than three points and pre-release or beta versions, Vagrant boxes must be of the format `X.Y.Z` where `X`, `Y`, and `Z` are all positive integers. Automatic Update Checking -------------------------- Using the [Vagrantfile](../vagrantfile/index), you can also configure Vagrant to automatically check for updates during any `vagrant up`. This is enabled by default, but can easily be disabled with `config.vm.box_check_update = false` in your Vagrantfile. When this is enabled, Vagrant will check for updates on every `vagrant up`, not just when the machine is being created from scratch, but also when it is resuming, starting after being halted, etc. If an update is found, Vagrant will output a warning to the user letting them know an update is available. That user can choose to ignore the warning for now, or can update the box by running `vagrant box update`. Vagrant cannot and does not automatically download the updated box and update the machine, because boxes can be relatively large and updating the machine requires destroying it and recreating it, which can cause important data to be lost. Therefore, this process is manual to the extent that the user has to manually enter a command to do it. Pruning Old Versions --------------------- Vagrant does not automatically prune old versions because it does not know if they might be in use by other Vagrant environments. Because boxes can be large, you may want to actively prune them once in a while using `vagrant box remove`. You can see all the boxes that are installed using `vagrant box list`. Another option is to use the `vagrant box prune` command to remove all installed boxes that are outdated and not currently in use. vagrant Additional Box Information Additional Box Information =========================== When creating a Vagrant box, you can supply additional information that might be relevant to the user when running `vagrant box list -i`. For example, you could package your box to include information about the author of the box and a website for users to learn more: ``` brian@localghost % vagrant box list -i hashicorp/precise64 (virtualbox, 1.0.0) - author: brian - homepage: https://www.vagrantup.com ``` Box Info --------- To accomplish this, you simply need to include a file named `info.json` when creating a [base box](base). This file is a JSON document containing any and all relevant information that will be displayed to the user when the `-i` option is used with `vagrant box list`. ``` { "author": "brian", "homepage": "https://example.com" } ``` There are no special keys or values in `info.json`, and Vagrant will print each key and value on its own line. The [Box File Format](format) provides more information about what else goes into a Vagrant box. vagrant Vagrant VMware Utility Installation Vagrant VMware Utility Installation ==================================== System Packages ---------------- The Vagrant VMware Utility is provided as a system package. To install the utility, download and install the correct system package from the downloads page. [Download 1.0.5](https://www.vagrantup.com/vmware/downloads.html) Manual Installation -------------------- If there is no officially supported system package of the utility available, it may be possible to manually install the utility. This applies to Linux platforms only.
First, download the latest zip package from the releases page. Next, create a directory for the executable and unpack the executable as root: ``` sudo mkdir /opt/vagrant-vmware-utility/bin sudo unzip -d /opt/vagrant-vmware-utility/bin vagrant-vmware-utility_1.0.0_x86_64.zip ``` After the executable has been installed, the utility setup tasks must be run. First, generate the required certificates: ``` sudo /opt/vagrant-vmware-utility/bin/vagrant-vmware-utility certificate generate ``` The path provided by this command can be used to set the [`utility_certificate_path`](configuration#utility_certificate_path) in the Vagrantfile configuration if installing to a non-standard path. Finally, install the service. This will also enable the service. ``` sudo /opt/vagrant-vmware-utility/bin/vagrant-vmware-utility service install ``` Usage ====== The Vagrant VMware Utility provides the Vagrant VMware provider plugin access to various VMware functionalities. The Vagrant VMware Utility is required by the Vagrant VMware Desktop provider plugin. Vagrant VMware Utility Access ------------------------------ The Vagrant VMware Utility provides support for all users on the system using the Vagrant VMware Desktop plugin. If access restrictions to the Utility need to be applied to users on the system, this can be accomplished by restricting user access to the certificates used for connecting to the service. On Windows platforms these certificates can be found at: * C:\ProgramData\HashiCorp\vagrant-vmware-desktop\certificates On POSIX platforms these certificates can be found at: * /opt/vagrant-vmware-desktop/certificates Vagrant VMware Utility Service ------------------------------- The Vagrant VMware Utility consists of a small service which runs on the host platform. When the utility installer package is installed, the service is configured to start automatically. If the plugin reports errors communicating with the service, it may have stopped for some reason. The most common cause of the service not being in a running state is the VMware application not being installed. The service can be started again by using the appropriate command below: ### Windows On Windows platforms a service called `vagrant-vmware-utility` is created. The service can be started manually using the services GUI (`services.msc`) or by running the following command from a `cmd.exe` in administrator mode: ``` > net.exe start vagrant-vmware-utility ``` ### macOS ``` > sudo launchctl load -w /Library/LaunchDaemons/com.vagrant.vagrant-vmware-utility.plist ``` ### Linux systemd ``` > sudo systemctl start vagrant-vmware-utility ``` ### Linux SysVinit ``` > sudo /etc/init.d/vagrant-vmware-utility start ``` ### Linux runit ``` > sudo sv start vagrant-vmware-utility ```
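Regardless of platform, a quick way to confirm that the utility service is healthy is to check that it is running and listening on its port (`9922` by default, per the provider configuration reference). The commands below are a minimal sketch for a systemd-based Linux host; use the equivalent service-management command for other platforms.

```
# Confirm the service is active (systemd hosts)
sudo systemctl status vagrant-vmware-utility

# Confirm the utility is listening on its default port
sudo ss -tlnp | grep 9922
```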
vagrant Boxes Boxes ====== As with [every Vagrant provider](../providers/basic_usage), the Vagrant VMware providers have a custom box format. This page documents the format so that you can create your own base boxes. Note that currently you must make these base boxes by hand. A future release of Vagrant will provide additional mechanisms for automatically creating such images. > **Note:** This is a reasonably advanced topic that a beginning user of Vagrant does not need to understand. If you are just getting started with Vagrant, skip this and use an available box. If you are an experienced user of Vagrant and want to create your own custom boxes, this is for you. > > Prior to reading this page, please understand the [basics of the box file format](../boxes/format). Contents --------- A VMware base box is a compressed archive of the necessary contents of a VMware "vmwarevm" file. Here is an example of what is contained in such a box: ``` $ tree . |-- disk-s001.vmdk |-- disk-s002.vmdk |-- ... |-- disk.vmdk |-- metadata.json |-- precise64.nvram |-- precise64.vmsd |-- precise64.vmx |-- precise64.vmxf 0 directories, 17 files ``` The files that are strictly required for a VMware machine to function are: nvram, vmsd, vmx, vmxf, and vmdk files. There is also the "metadata.json" file used by Vagrant itself. This file contains nothing but the defaults which are documented on the [box format](../boxes/format) page. When bringing up a VMware backed machine, Vagrant copies all of the contents in the box into a privately managed "vmwarevm" folder, and uses the first "vmx" file found to control the machine. > **Vagrant 1.8 and higher support linked clones**. Prior versions of Vagrant do not support linked clones. For more information on linked clones, please see the documentation. > > VMX Whitelisting ----------------- Settings in the VMX file control the behavior of the VMware virtual machine when it is booted. In the past Vagrant has removed the configured network device when creating a new instance and inserted a new configuration. With the introduction of ["predictable network interface names"](https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/) this approach can cause unexpected behaviors or errors with VMware Vagrant boxes. While some boxes that use the predictable network interface names are configured to handle the VMX modifications Vagrant makes, it is better if Vagrant does not make the modification at all. Vagrant will now warn if a whitelisted setting is detected within a Vagrant box VMX file. If it is detected, a warning will be shown alerting the user and providing a configuration snippet. The configuration snippet can be used in the Vagrantfile if Vagrant fails to start the virtual machine. ### Making compatible boxes These are the VMX settings the whitelisting applies to: * [`ethernet*.pcislotnumber`](#ethernet-pcislotnumber) If the newly created box does not depend on Vagrant's existing behavior of modifying this setting, it can disable Vagrant from applying the modification by adding a Vagrantfile to the box with the following content: ``` Vagrant.configure("2") do |config| ["vmware_workstation", "vmware_fusion"].each do |vmware_provider| config.vm.provider(vmware_provider) do |vmware| vmware.whitelist_verified = true end end end ``` This will prevent Vagrant from displaying a warning to the user as well as disable the VMX settings modifications. 
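As a reminder of the `metadata.json` file listed in the box contents above, that file typically only needs to declare which provider the box targets. The sketch below is an assumption based on the provider name used elsewhere in this documentation (`vmware_desktop`); older boxes may instead use `vmware_fusion` or `vmware_workstation` here.

```
{
  "provider": "vmware_desktop"
}
```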
Installed Software ------------------- Base boxes for VMware should have the following software installed, as a bare minimum: * SSH server with key-based authentication setup. If you want the box to work with default Vagrant settings, the SSH user must be set to accept the [insecure keypair](https://github.com/hashicorp/vagrant/blob/master/keys/vagrant.pub) that ships with Vagrant. * [VMware Tools](https://kb.vmware.com/kb/340) so that things such as shared folders can function. There are many other benefits to installing the tools, such as improved networking performance. Optimizing Box Size -------------------- Prior to packaging up a box, you should shrink the hard drives as much as possible. This can be done with `vmware-vdiskmanager` which is usually found in `/Applications/VMware Fusion.app/Contents/Library` for VMware Fusion. You first want to defragment then shrink the drive. Usage shown below: ``` $ vmware-vdiskmanager -d /path/to/main.vmdk ... $ vmware-vdiskmanager -k /path/to/main.vmdk ... ``` Packaging ---------- Remove any extraneous files from the "vmwarevm" folder and package it. Be sure to compress the tar with gzip (done below in a single command) since VMware hard disks are not compressed by default. ``` $ cd /path/to/my/vm.vmwarevm $ tar cvzf custom.box ./* ``` vagrant VMware VMware ======= [HashiCorp](https://www.hashicorp.com) develops an official [VMware Fusion](https://www.vmware.com/products/fusion/overview.html) and [VMware Workstation](https://www.vmware.com/products/workstation/) [provider](../providers/index) for Vagrant. This provider allows Vagrant to power VMware based machines and take advantage of the improved stability and performance that VMware software offers. Learn more about the VMware providers on the [VMware provider](https://www.vagrantup.com/vmware) page on the Vagrant website. This provider is a drop-in replacement for VirtualBox, meaning that every VirtualBox feature that Vagrant supports is fully functional in VMware as well. However, there are some VMware-specific things such as box formats, configurations, etc. that are documented here. For the most up-to-date information on compatibility and supported versions of VMware Fusion and VMware Workstation, please visit the [Vagrant VMware product page](https://www.vagrantup.com/vmware). Please note that VMware Fusion and VMware Workstation are third-party products that must be purchased and installed separately prior to using the provider. Use the navigation to the left to find a specific VMware topic to read more about. vagrant Kernel Upgrade Kernel Upgrade =============== If as part of running your Vagrant environment with VMware, you perform a kernel upgrade, it is likely that the VMware guest tools will stop working. This breaks features of Vagrant such as synced folders and sometimes networking as well. This page documents how to upgrade your kernel and keep your guest tools functioning. If you are not planning to upgrade your kernel, then you can safely skip this page. Enable Auto-Upgrade of VMware Tools ------------------------------------ If you are running a common OS, VMware tools can often auto-upgrade themselves. This setting is disabled by default. 
The Vagrantfile settings below will enable auto-upgrading: ``` # Ensure that VMWare Tools recompiles kernel modules # when we update the linux images $fix_vmware_tools_script = <<SCRIPT sed -i.bak 's/answer AUTO_KMODS_ENABLED_ANSWER no/answer AUTO_KMODS_ENABLED_ANSWER yes/g' /etc/vmware-tools/locations sed -i 's/answer AUTO_KMODS_ENABLED no/answer AUTO_KMODS_ENABLED yes/g' /etc/vmware-tools/locations SCRIPT Vagrant.configure("2") do |config| # ... config.vm.provision "shell", inline: $fix_vmware_tools_script end ``` Note that this does not work for every OS, so `vagrant up` with the above settings, do a kernel upgrade, and do a `vagrant reload`. If HGFS (synced folders) and everything appears to be working, great! If not, then read on... Manually Reinstalling VMware Tools ----------------------------------- At this point, you will have to manually reinstall VMware tools. The best source of information for how to do this is the [VMware documentation](https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1018414). There are some gotchas: * Make sure you have the kernel headers properly installed. This varies by distro but is generally a package available via the package manager. * Watch the installation output carefully. Even if HGFS (synced folders) support failed to build, the installer will output that installing VMware tools was successful. Read the output to find any error messages. vagrant Configuration Configuration ============== While Vagrant VMware Desktop provider is a drop-in replacement for VirtualBox, there are some additional features that are exposed that allow you to more finely configure VMware-specific aspects of your machines. Configuration settings for the provider are set in the Vagrantfile: ``` Vagrant.configure("2") do |config| config.vm.box = "my-box" config.vm.provider "vmware_desktop" do |v| v.gui = true end end ``` Provider settings ------------------ * [`clone_directory`](#clone_directory) (string) - Path for storing VMware clones. This value can also be set using the `VAGRANT_VMWARE_CLONE_DIRECTORY` environment variable. This defaults to `./.vagrant` * [`enable_vmrun_ip_lookup`](#enable_vmrun_ip_lookup) (bool) - Use vmrun to discover guest IP address. This defaults to `true` * [`functional_hgfs`](#functional_hgfs) (bool) - HGFS is functional within the guest. This defaults to detected capability of the guest * [`unmount_default_hgfs`](#unmount_default_hgfs) (bool) - Unmount the default HGFS mount point within the guest. This defaults to `false` * [`gui`](#gui) (bool) - Launch guest with a GUI. This defaults to `false` * [`ssh_info_public`](#ssh_info_public) (bool) - Use the public IP address for SSH connections to guest. This defaults to `false` * [`verify_vmnet`](#verify_vmnet) (bool) - Verify vmnet devices health before usage. This defaults to `true` * [`linked_clone`](#linked_clone) (bool) - Use linked clones instead of full copy clones. This defaults to `true` * [`vmx`](#vmx) (hash) - VMX key/value pairs to set or unset. If the value is `nil`, the key will be deleted. * [`whitelist_verified`](#whitelist_verified) (bool, symbol) - Flag that VMware box has been properly configured for whitelisted VMX settings. `true` if verified, `false` if unverified, `:disable_warning` to silence whitelist warnings. * [`port_forward_network_pause`](#port_forward_network_pause) - Number of seconds to pause after applying port forwarding configuration. 
This gives the guest time to acquire a DHCP address if the previous address is dropped when VMware network services are restarted. This defaults to `0` * [`utility_port`](#utility_port) (integer) - Listen port of the Vagrant VMware Utility service. This defaults to `9922` * [`utility_certificate_path`](#utility_certificate_path) (string) - Path to the Vagrant VMware Utility service certificates directory. The default value is dependent on the host ### VM Clone Directory By default, the VMware provider will clone the VMware VM in the box to the ".vagrant" folder relative to the folder where the Vagrantfile is. Usually, this is fine. For some people, for example those who use differential backup software such as Time Machine, this is very annoying because you cannot regularly ignore giant virtual machines as part of backups. The directory where the provider clones the virtual machine can be customized by setting the `VAGRANT_VMWARE_CLONE_DIRECTORY` environment variable. This does not need to be unique per project. Each project will get a different sub-directory within this folder. Therefore, it is safe to set this system-wide. ### Linked Clones By default, new machines are created using a linked clone of the base box. This reduces the time and disk space required compared with directly importing the base box. Linked clones are based on a master VM, which is generated by importing the base box only once, the first time it is required. For the linked clones, only differencing disk images are created, with the parent disk image belonging to the master VM. To disable linked clones: ``` config.vm.provider "vmware_desktop" do |v| v.linked_clone = false end ``` ### VMX Customization If you want to add or remove specific keys from the VMX file, you can do that: ``` config.vm.provider "vmware_desktop" do |v| v.vmx["custom-key"] = "value" v.vmx["another-key"] = nil end ``` In the example above, the "custom-key" key will be set to "value" and the "another-key" key will be removed from the VMX file. VMX customization is done as the final step before the VMware machine is booted, so you have the ability to possibly undo or misconfigure things that Vagrant has set up itself. VMX is an undocumented format and there is no official reference for the available keys and values. This customization option is exposed for people who know exactly what they want. The most common keys people look for are those setting memory and CPUs. The example below sets both: ``` config.vm.provider "vmware_desktop" do |v| v.vmx["memsize"] = "1024" v.vmx["numvcpus"] = "2" end ``` vagrant Known Issues Known Issues ============= This page tracks some known issues or limitations of the VMware provider. Note that none of these are generally blockers to using the provider, but they are good to know about. Network disconnect ------------------- When Vagrant applies port forwarding rules while bringing up a guest instance, other running VMware VMs may experience a loss of network connectivity. The cause of this connectivity issue is the restarting of the VMware NAT service to apply new port forwarding rules. Since new rules cannot be applied to the NAT service while it is running, the service must be restarted, which results in the loss of connectivity. Forwarded Ports Failing in Workstation on Windows -------------------------------------------------- VMware Workstation has a bug on Windows where forwarded ports do not work properly. Vagrant actually works around this bug and makes them work.
However, if you run the virtual network editor on Windows, the forwarded ports will suddenly stop working. In this case, run `vagrant reload` and things will begin working again. This issue has been reported to VMware, but a fix has not been released yet. vagrant Installation Installation ============= If you are upgrading from the Vagrant VMware Workstation or Vagrant VMware Fusion plugins, please halt or destroy all VMware VMs currently being managed by Vagrant. Then continue with the instructions below. Installation of the Vagrant VMware provider requires two steps. First, the Vagrant VMware Utility must be installed. This can be done by downloading and installing the correct system package from the [Vagrant VMware Utility downloads page](https://www.vagrantup.com/vmware/downloads.html). Next, install the Vagrant VMware provider plugin using the standard plugin installation procedure: ``` $ vagrant plugin install vagrant-vmware-desktop ``` For more information on plugin installation, please see the [Vagrant plugin usage documentation](../plugins/usage). The Vagrant VMware plugin is a commercial product provided by [HashiCorp](https://www.hashicorp.com) and **requires the purchase of a license** to operate. To purchase a license, please visit the [Vagrant VMware provider](https://www.vagrantup.com/vmware#buy-now) page. Upon purchasing a license, you will receive a license file in your inbox. Download this file and save it to a temporary location on your computer. > **Warning!** You cannot use your VMware product license as a Vagrant VMware plugin license. They are separate commercial products, each requiring its own license. > > After installing the Vagrant VMware Desktop plugin for your system, you will need to install the license: ``` $ vagrant plugin license vagrant-vmware-desktop ~/license.lic ``` The first parameter is the name of the plugin, and the second parameter is the path to the license file on disk. Please be sure to replace `~/license.lic` with the path where you temporarily saved the downloaded license file to disk. After you have installed the plugin license, you may remove the temporary file. To verify the license installation, run: ``` $ vagrant ``` If the license is not installed correctly, you will see an error message. Upgrading to v1.x ------------------ It is **extremely important** that the VMware plugin is upgraded to 1.0.0 or above. This release resolved critical security vulnerabilities. To learn more, please [read our release announcement](https://www.hashicorp.com/blog/introducing-the-vagrant-vmware-desktop-plugin). After upgrading, please verify that the following paths are empty. The upgrade process should remove these for you, but for security reasons it is important to double check. If you are a new user, are installing the VMware provider on a new machine, or are a Windows user, you may skip this step. The path `~/.vagrant.d/gems/*/vagrant-vmware-{fusion,workstation}` should no longer exist. The gem `vagrant-vmware-desktop` may exist since this is the name of the new plugin. If the old directories exist, remove them. An example for a Unix-like shell is shown below: ``` # Check if they exist and verify that they're the correct paths as shown below. $ ls ~/.vagrant.d/gems/*/vagrant-vmware-{fusion,workstation} ...
# Remove them $ rm -rf ~/.vagrant.d/gems/*/vagrant-vmware-{fusion,workstation} ``` Updating the Vagrant VMware Desktop plugin ------------------------------------------- The Vagrant VMware Desktop plugin can be updated directly from Vagrant. Run the following command to update Vagrant to the latest version of the Vagrant VMware Desktop plugin: ``` $ vagrant plugin update vagrant-vmware-desktop ``` Frequently Asked Questions --------------------------- **Q: I purchased a Vagrant VMware plugin license, but I did not receive an email?** First, please check your JUNK or SPAM folders. Since the license comes from an automated system, it might have been flagged as spam by your email provider. If you do not see the email there, please [contact support](mailto:[email protected]?subject=License%20Not%20Received) and include the original order number. **Q: Do I need to keep the Vagrant VMware plugin license file on disk?** After you have installed the Vagrant VMware plugin license, it is safe to remove your copy from disk. Vagrant copies the license into its structure for reference on boot. **Q: I lost my original email, where can I download my Vagrant VMware plugin license again?** Please [contact support](mailto:[email protected]?subject=Lost%20My%20License&body=Hello%20support!%20I%20seem%20to%20have%20misplaced%20my%20Vagrant%20VMware%20license.%20Could%20you%20please%20send%20it%20to%20me?%20Thanks!). **Note:** please contact support using the email address with which you made the original purchase. If you use an alternate email, you will be asked to verify that you are the owner of the requested license. **Q: I upgraded my VMware product and now my license is invalid?** The Vagrant VMware plugin licenses are valid for specific VMware product versions at the time of purchase. When new versions of VMware products are released, significant changes to the plugin code are often required to support this new version. For this reason, you may need to upgrade your current license to work with the new version of the VMware product. Customers can check their license upgrade eligibility by visiting the [License Upgrade Center](https://license.hashicorp.com/upgrade/vmware) and entering the email address with which they made the original purchase. Your existing license will continue to work with all previous versions of the VMware products. If you do not wish to update at this time, you can rollback your VMware installation to an older version. **Q: Why is the Vagrant VMware plugin not working with my trial version of VMware Fusion/Workstation?** The Vagrant VMware Fusion and Vagrant VMware Workstation plugins are not compatible with trial versions of the VMware products. We apologize for the inconvenience. **Q: How do I upgrade my currently installed Vagrant VMware plugin?** You can update the Vagrant VMware plugin to the latest version by re-running the install command: ``` $ vagrant plugin install vagrant-vmware-desktop ``` Support -------- If you have any issues purchasing, installing, or using the Vagrant VMware plugins, please [contact support](mailto:[email protected]). To expedite the support process, please include the [Vagrant debug output](../other/debugging) as a Gist if applicable. This will help us more quickly diagnose your issue.
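Before moving on to usage, a quick sanity check is to list the installed plugins and confirm the provider plugin is present:

```
$ vagrant plugin list
```

The `vagrant-vmware-desktop` plugin should appear in the output; if it does not, re-run the installation steps above.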
vagrant Usage Usage ====== The Vagrant VMware provider is used just like any other provider. Please read the general [basic usage](../providers/basic_usage) page for providers. The value to use for the `--provider` flag is `vmware_desktop`. For compatibility with older versions of the plugin, `vmware_fusion` can be used for VMware Fusion, and `vmware_workstation` for VMware Workstation. The Vagrant VMware provider does not support parallel execution at this time. Specifying the `--parallel` option will have no effect. To get started, create a new `Vagrantfile` that points to a VMware box: ``` # vagrant init hashicorp/precise64 Vagrant.configure("2") do |config| config.vm.box = "hashicorp/precise64" end ``` Then run: ``` $ vagrant up --provider vmware_desktop ``` This will download and bring up a new VMware Fusion/Workstation virtual machine in Vagrant. haproxy Starter Guide Starter Guide ============= ``` This document doesn't provide any configuration help or hints, but it explains where to find the relevant documents. The summary below is meant to help you search sections by name and navigate through the document. Note to documentation contributors : This document is formatted with 80 columns per line, with even number of spaces for indentation and without tabs. Please follow these rules strictly so that it remains easily printable everywhere. If you add sections, please update the summary below for easier searching. ``` 1. Available documentation --------------------------- ``` The complete HAProxy documentation is contained in the following documents. Please ensure to consult the relevant documentation to save time and to get the most accurate response to your needs. Also please refrain from sending questions to the mailing list whose responses are present in these documents. - intro.txt (this document) : it presents the basics of load balancing, HAProxy as a product, what it does, what it doesn't do, some known traps to avoid, some OS-specific limitations, how to get it, how it evolves, how to ensure you're running with all known fixes, how to update it, complements and alternatives. - management.txt : it explains how to start haproxy, how to manage it at runtime, how to manage it on multiple nodes, and how to proceed with seamless upgrades. - configuration.txt : the reference manual details all configuration keywords and their options. It is used when a configuration change is needed. - coding-style.txt : this is for developers who want to propose some code to the project. It explains the style to adopt for the code. It is not very strict and not all the code base completely respects it, but contributions which diverge too much from it will be rejected. - proxy-protocol.txt : this is the de-facto specification of the PROXY protocol which is implemented by HAProxy and a number of third party products. - README : how to build HAProxy from sources ``` 2. Quick introduction to load balancing and load balancers ----------------------------------------------------------- ``` Load balancing consists in aggregating multiple components in order to achieve a total processing capacity above each component's individual capacity, without any intervention from the end user and in a scalable way. This results in more operations being performed simultaneously by the time it takes a component to perform only one. A single operation however will still be performed on a single component at a time and will not get faster than without load balancing. 
It always requires at least as many operations as available components and an efficient load balancing mechanism to make use of all components and to fully benefit from the load balancing. A good example of this is the number of lanes on a highway which allows as many cars to pass during the same time frame without increasing their individual speed. Examples of load balancing : - Process scheduling in multi-processor systems - Link load balancing (e.g. EtherChannel, Bonding) - IP address load balancing (e.g. ECMP, DNS round-robin) - Server load balancing (via load balancers) The mechanism or component which performs the load balancing operation is called a load balancer. In web environments these components are called a "network load balancer", and more commonly a "load balancer" given that this activity is by far the best known case of load balancing. A load balancer may act : - at the link level : this is called link load balancing, and it consists in choosing what network link to send a packet to; - at the network level : this is called network load balancing, and it consists in choosing what route a series of packets will follow; - at the server level : this is called server load balancing and it consists in deciding what server will process a connection or request. Two distinct technologies exist and address different needs, though with some overlapping. In each case it is important to keep in mind that load balancing consists in diverting the traffic from its natural flow and that doing so always requires a minimum of care to maintain the required level of consistency between all routing decisions. The first one acts at the packet level and processes packets more or less individually. There is a 1-to-1 relation between input and output packets, so it is possible to follow the traffic on both sides of the load balancer using a regular network sniffer. This technology can be very cheap and extremely fast. It is usually implemented in hardware (ASICs) allowing to reach line rate, such as switches doing ECMP. Usually stateless, it can also be stateful (consider the session a packet belongs to and called layer4-LB or L4), may support DSR (direct server return, without passing through the LB again) if the packets were not modified, but provides almost no content awareness. This technology is very well suited to network-level load balancing, though it is sometimes used for very basic server load balancing at high speed. The second one acts on session contents. It requires that the input streams is reassembled and processed as a whole. The contents may be modified, and the output stream is segmented into new packets. For this reason it is generally performed by proxies and they're often called layer 7 load balancers or L7. This implies that there are two distinct connections on each side, and that there is no relation between input and output packets sizes nor counts. Clients and servers are not required to use the same protocol (for example IPv4 vs IPv6, clear vs SSL). The operations are always stateful, and the return traffic must pass through the load balancer. The extra processing comes with a cost so it's not always possible to achieve line rate, especially with small packets. On the other hand, it offers wide possibilities and is generally achieved by pure software, even if embedded into hardware appliances. This technology is very well suited for server load balancing. 
Packet-based load balancers are generally deployed in cut-through mode, so they are installed on the normal path of the traffic and divert it according to the configuration. The return traffic doesn't necessarily pass through the load balancer. Some modifications may be applied to the network destination address in order to direct the traffic to the proper destination. In this case, it is mandatory that the return traffic passes through the load balancer. If the routes doesn't make this possible, the load balancer may also replace the packets' source address with its own in order to force the return traffic to pass through it. Proxy-based load balancers are deployed as a server with their own IP addresses and ports, without architecture changes. Sometimes this requires to perform some adaptations to the applications so that clients are properly directed to the load balancer's IP address and not directly to the server's. Some load balancers may have to adjust some servers' responses to make this possible (e.g. the HTTP Location header field used in HTTP redirects). Some proxy-based load balancers may intercept traffic for an address they don't own, and spoof the client's address when connecting to the server. This allows them to be deployed as if they were a regular router or firewall, in a cut-through mode very similar to the packet based load balancers. This is particularly appreciated for products which combine both packet mode and proxy mode. In this case DSR is obviously still not possible and the return traffic still has to be routed back to the load balancer. A very scalable layered approach would consist in having a front router which receives traffic from multiple load balanced links, and uses ECMP to distribute this traffic to a first layer of multiple stateful packet-based load balancers (L4). These L4 load balancers in turn pass the traffic to an even larger number of proxy-based load balancers (L7), which have to parse the contents to decide what server will ultimately receive the traffic. The number of components and possible paths for the traffic increases the risk of failure; in very large environments, it is even normal to permanently have a few faulty components being fixed or replaced. Load balancing done without awareness of the whole stack's health significantly degrades availability. For this reason, any sane load balancer will verify that the components it intends to deliver the traffic to are still alive and reachable, and it will stop delivering traffic to faulty ones. This can be achieved using various methods. The most common one consists in periodically sending probes to ensure the component is still operational. These probes are called "health checks". They must be representative of the type of failure to address. For example a ping- based check will not detect that a web server has crashed and doesn't listen to a port anymore, while a connection to the port will verify this, and a more advanced request may even validate that the server still works and that the database it relies on is still accessible. Health checks often involve a few retries to cover for occasional measuring errors. The period between checks must be small enough to ensure the faulty component is not used for too long after an error occurs. Other methods consist in sampling the production traffic sent to a destination to observe if it is processed correctly or not, and to evict the components which return inappropriate responses. 
However this requires to sacrifice a part of the production traffic and this is not always acceptable. A combination of these two mechanisms provides the best of both worlds, with both of them being used to detect a fault, and only health checks to detect the end of the fault. A last method involves centralized reporting : a central monitoring agent periodically updates all load balancers about all components' state. This gives a global view of the infrastructure to all components, though sometimes with less accuracy or responsiveness. It's best suited for environments with many load balancers and many servers. Layer 7 load balancers also face another challenge known as stickiness or persistence. The principle is that they generally have to direct multiple subsequent requests or connections from a same origin (such as an end user) to the same target. The best known example is the shopping cart on an online store. If each click leads to a new connection, the user must always be sent to the server which holds his shopping cart. Content-awareness makes it easier to spot some elements in the request to identify the server to deliver it to, but that's not always enough. For example if the source address is used as a key to pick a server, it can be decided that a hash-based algorithm will be used and that a given IP address will always be sent to the same server based on a divide of the address by the number of available servers. But if one server fails, the result changes and all users are suddenly sent to a different server and lose their shopping cart. The solution against this issue consists in memorizing the chosen target so that each time the same visitor is seen, he's directed to the same server regardless of the number of available servers. The information may be stored in the load balancer's memory, in which case it may have to be replicated to other load balancers if it's not alone, or it may be stored in the client's memory using various methods provided that the client is able to present this information back with every request (cookie insertion, redirection to a sub-domain, etc). This mechanism provides the extra benefit of not having to rely on unstable or unevenly distributed information (such as the source IP address). This is in fact the strongest reason to adopt a layer 7 load balancer instead of a layer 4 one. In order to extract information such as a cookie, a host header field, a URL or whatever, a load balancer may need to decrypt SSL/TLS traffic and even possibly to re-encrypt it when passing it to the server. This expensive task explains why in some high-traffic infrastructures, sometimes there may be a lot of load balancers. Since a layer 7 load balancer may perform a number of complex operations on the traffic (decrypt, parse, modify, match cookies, decide what server to send to, etc), it can definitely cause some trouble and will very commonly be accused of being responsible for a lot of trouble that it only revealed. Often it will be discovered that servers are unstable and periodically go up and down, or for web servers, that they deliver pages with some hard-coded links forcing the clients to connect directly to one specific server without passing via the load balancer, or that they take ages to respond under high load causing timeouts. That's why logging is an extremely important aspect of layer 7 load balancing. Once a trouble is reported, it is important to figure if the load balancer took a wrong decision and if so why so that it doesn't happen anymore. ``` 3. 
Introduction to HAProxy --------------------------- ``` HAProxy is written as "HAProxy" to designate the product, and as "haproxy" to designate the executable program, software package or a process. However, both are commonly used for both purposes, and are pronounced H-A-Proxy. Very early, "haproxy" used to stand for "high availability proxy" and the name was written in two separate words, though by now it means nothing else than "HAProxy". ``` ### 3.1. What HAProxy is and isn't ``` HAProxy is : - a TCP proxy : it can accept a TCP connection from a listening socket, connect to a server and attach these sockets together allowing traffic to flow in both directions; IPv4, IPv6 and even UNIX sockets are supported on either side, so this can provide an easy way to translate addresses between different families. - an HTTP reverse-proxy (called a "gateway" in HTTP terminology) : it presents itself as a server, receives HTTP requests over connections accepted on a listening TCP socket, and passes the requests from these connections to servers using different connections. It may use any combination of HTTP/1.x or HTTP/2 on any side and will even automatically detect the protocol spoken on each side when ALPN is used over TLS. - an SSL terminator / initiator / offloader : SSL/TLS may be used on the connection coming from the client, on the connection going to the server, or even on both connections. A lot of settings can be applied per name (SNI), and may be updated at runtime without restarting. Such setups are extremely scalable and deployments involving tens to hundreds of thousands of certificates were reported. - a TCP normalizer : since connections are locally terminated by the operating system, there is no relation between both sides, so abnormal traffic such as invalid packets, flag combinations, window advertisements, sequence numbers, incomplete connections (SYN floods), or so will not be passed to the other side. This protects fragile TCP stacks from protocol attacks, and also allows to optimize the connection parameters with the client without having to modify the servers' TCP stack settings. - an HTTP normalizer : when configured to process HTTP traffic, only valid complete requests are passed. This protects against a lot of protocol-based attacks. Additionally, protocol deviations for which there is a tolerance in the specification are fixed so that they don't cause problem on the servers (e.g. multiple-line headers). - an HTTP fixing tool : it can modify / fix / add / remove / rewrite the URL or any request or response header. This helps fixing interoperability issues in complex environments. - a content-based switch : it can consider any element from the request to decide what server to pass the request or connection to. Thus it is possible to handle multiple protocols over a same port (e.g. HTTP, HTTPS, SSH). - a server load balancer : it can load balance TCP connections and HTTP requests. In TCP mode, load balancing decisions are taken for the whole connection. In HTTP mode, decisions are taken per request. - a traffic regulator : it can apply some rate limiting at various points, protect the servers against overloading, adjust traffic priorities based on the contents, and even pass such information to lower layers and outer network components by marking packets. 
- a protection against DDoS and service abuse : it can maintain a wide number of statistics per IP address, URL, cookie, etc and detect when an abuse is happening, then take action (slow down the offenders, block them, send them to outdated contents, etc). - an observation point for network troubleshooting : due to the precision of the information reported in logs, it is often used to narrow down some network-related issues. - an HTTP compression offloader : it can compress responses which were not compressed by the server, thus reducing the page load time for clients with poor connectivity or using high-latency, mobile networks. - a caching proxy : it may cache responses in RAM so that subsequent requests for the same object avoid the cost of another network transfer from the server as long as the object remains present and valid. It will however not store objects to any persistent storage. Please note that this caching feature is designed to be maintenance free and focuses solely on saving haproxy's precious resources and not on save the server's resources. Caches designed to optimize servers require much more tuning and flexibility. If you instead need such an advanced cache, please use Varnish Cache, which integrates perfectly with haproxy, especially when SSL/TLS is needed on any side. - a FastCGI gateway : FastCGI can be seen as a different representation of HTTP, and as such, HAProxy can directly load-balance a farm comprising any combination of FastCGI application servers without requiring to insert another level of gateway between them. This results in resource savings and a reduction of maintenance costs. HAProxy is not : - an explicit HTTP proxy, i.e. the proxy that browsers use to reach the internet. There are excellent open-source software dedicated for this task, such as Squid. However HAProxy can be installed in front of such a proxy to provide load balancing and high availability. - a data scrubber : it will not modify the body of requests nor responses. - a static web server : during startup, it isolates itself inside a chroot jail and drops its privileges, so that it will not perform any single file- system access once started. As such it cannot be turned into a static web server (dynamic servers are supported through FastCGI however). There are excellent open-source software for this such as Apache or Nginx, and HAProxy can be easily installed in front of them to provide load balancing, high availability and acceleration. - a packet-based load balancer : it will not see IP packets nor UDP datagrams, will not perform NAT or even less DSR. These are tasks for lower layers. Some kernel-based components such as IPVS (Linux Virtual Server) already do this pretty well and complement perfectly with HAProxy. ``` ### 3.2. How HAProxy works ``` HAProxy is an event-driven, non-blocking engine combining a very fast I/O layer with a priority-based, multi-threaded scheduler. As it is designed with a data forwarding goal in mind, its architecture is optimized to move data as fast as possible with the least possible operations. It focuses on optimizing the CPU cache's efficiency by sticking connections to the same CPU as long as possible. As such it implements a layered model offering bypass mechanisms at each level ensuring data doesn't reach higher levels unless needed. 
Most of the processing is performed in the kernel, and HAProxy does its best to help the kernel do the work as fast as possible by giving some hints or by avoiding certain operation when it guesses they could be grouped later. As a result, typical figures show 15% of the processing time spent in HAProxy versus 85% in the kernel in TCP or HTTP close mode, and about 30% for HAProxy versus 70% for the kernel in HTTP keep-alive mode. A single process can run many proxy instances; configurations as large as 300000 distinct proxies in a single process were reported to run fine. A single core, single CPU setup is far more than enough for more than 99% users, and as such, users of containers and virtual machines are encouraged to use the absolute smallest images they can get to save on operational costs and simplify troubleshooting. However the machine HAProxy runs on must never ever swap, and its CPU must not be artificially throttled (sub-CPU allocation in hypervisors) nor be shared with compute-intensive processes which would induce a very high context-switch latency. Threading allows to exploit all available processing capacity by using one thread per CPU core. This is mostly useful for SSL or when data forwarding rates above 40 Gbps are needed. In such cases it is critically important to avoid communications between multiple physical CPUs, which can cause strong bottlenecks in the network stack and in HAProxy itself. While counter-intuitive to some, the first thing to do when facing some performance issues is often to reduce the number of CPUs HAProxy runs on. HAProxy only requires the haproxy executable and a configuration file to run. For logging it is highly recommended to have a properly configured syslog daemon and log rotations in place. Logs may also be sent to stdout/stderr, which can be useful inside containers. The configuration files are parsed before starting, then HAProxy tries to bind all listening sockets, and refuses to start if anything fails. Past this point it cannot fail anymore. This means that there are no runtime failures and that if it accepts to start, it will work until it is stopped. Once HAProxy is started, it does exactly 3 things : - process incoming connections; - periodically check the servers' status (known as health checks); - exchange information with other haproxy nodes. Processing incoming connections is by far the most complex task as it depends on a lot of configuration possibilities, but it can be summarized as the 9 steps below : - accept incoming connections from listening sockets that belong to a configuration entity known as a "frontend", which references one or multiple listening addresses; - apply the frontend-specific processing rules to these connections that may result in blocking them, modifying some headers, or intercepting them to execute some internal applets such as the statistics page or the CLI; - pass these incoming connections to another configuration entity representing a server farm known as a "backend", which contains the list of servers and the load balancing strategy for this server farm; - apply the backend-specific processing rules to these connections; - decide which server to forward the connection to according to the load balancing strategy; - apply the backend-specific processing rules to the response data; - apply the frontend-specific processing rules to the response data; - emit a log to report what happened in fine details; - in HTTP, loop back to the second step to wait for a new request, otherwise close the connection. 
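As an illustration of the frontend/backend split used in the steps above, a
minimal configuration sketch could look like the one below. It is not taken
from this document; the proxy names and the server addresses are placeholders :

    frontend www
        mode http
        bind :80
        default_backend app_servers

    backend app_servers
        mode http
        balance roundrobin
        server app1 192.0.2.11:80 check
        server app2 192.0.2.12:80 check

The frontend only describes how connections are accepted, the backend only
describes the server farm and its load balancing strategy, and the "check"
keyword enables the health checks discussed earlier.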
Frontends and backends are sometimes considered as half-proxies, since they only look at one side of an end-to-end connection; the frontend only cares about the clients while the backend only cares about the servers. HAProxy also supports full proxies which are exactly the union of a frontend and a backend. When HTTP processing is desired, the configuration will generally be split into frontends and backends as they open a lot of possibilities since any frontend may pass a connection to any backend. With TCP-only proxies, using frontends and backends rarely provides a benefit and the configuration can be more readable with full proxies. ``` ### 3.3. Basic features ``` This section will enumerate a number of features that HAProxy implements, some of which are generally expected from any modern load balancer, and some of which are a direct benefit of HAProxy's architecture. More advanced features will be detailed in the next section. ``` #### 3.3.1. Basic features : Proxying ``` Proxying is the action of transferring data between a client and a server over two independent connections. The following basic features are supported by HAProxy regarding proxying and connection management : - Provide the server with a clean connection to protect them against any client-side defect or attack; - Listen to multiple IP addresses and/or ports, even port ranges; - Transparent accept : intercept traffic targeting any arbitrary IP address that doesn't even belong to the local system; - Server port doesn't need to be related to listening port, and may even be translated by a fixed offset (useful with ranges); - Transparent connect : spoof the client's (or any) IP address if needed when connecting to the server; - Provide a reliable return IP address to the servers in multi-site LBs; - Offload the server thanks to buffers and possibly short-lived connections to reduce their concurrent connection count and their memory footprint; - Optimize TCP stacks (e.g. SACK), congestion control, and reduce RTT impacts; - Support different protocol families on both sides (e.g. IPv4/IPv6/Unix); - Timeout enforcement : HAProxy supports multiple levels of timeouts depending on the stage the connection is, so that a dead client or server, or an attacker cannot be granted resources for too long; - Protocol validation: HTTP, SSL, or payload are inspected and invalid protocol elements are rejected, unless instructed to accept them anyway; - Policy enforcement : ensure that only what is allowed may be forwarded; - Both incoming and outgoing connections may be limited to certain network namespaces (Linux only), making it easy to build a cross-container, multi-tenant load balancer; - PROXY protocol presents the client's IP address to the server even for non-HTTP traffic. This is an HAProxy extension that was adopted by a number of third-party products by now, at least these ones at the time of writing : - client : haproxy, stud, stunnel, exaproxy, ELB, squid - server : haproxy, stud, postfix, exim, nginx, squid, node.js, varnish ``` #### 3.3.2. Basic features : SSL ``` HAProxy's SSL stack is recognized as one of the most featureful according to Google's engineers (http://istlsfastyet.com/). The most commonly used features making it quite complete are : - SNI-based multi-hosting with no limit on sites count and focus on performance. 
At least one deployment is known for running 50000 domains with their respective certificates; - support for wildcard certificates reduces the need for many certificates; - certificate-based client authentication with configurable policies on failure to present a valid certificate. This allows a different server farm to be presented, for example to regenerate the client certificate; - authentication of the backend server ensures the backend server is the real one and not a man in the middle; - authentication with the backend server lets the backend server know it's really the expected haproxy node that is connecting to it; - TLS NPN and ALPN extensions make it possible to reliably offload SPDY/HTTP2 connections and pass them in clear text to backend servers; - OCSP stapling further reduces first page load time by delivering an OCSP response inline when the client sends a Certificate Status Request; - Dynamic record sizing provides both high performance and low latency, and significantly reduces page load time by letting the browser start to fetch new objects while packets are still in flight; - permanent access to all relevant SSL/TLS layer information for logging, access control, reporting etc. These elements can be embedded into HTTP headers or even as a PROXY protocol extension so that the offloaded server gets all the information it would have had if it performed the SSL termination itself. - Detect, log and block certain known attacks even on vulnerable SSL libs, such as the Heartbleed attack affecting certain versions of OpenSSL. - support for stateless session resumption (RFC 5077 TLS Ticket extension). TLS tickets can be updated from the CLI, which provides a means to implement Perfect Forward Secrecy by frequently rotating the tickets. ``` #### 3.3.3. Basic features : Monitoring ``` HAProxy focuses a lot on availability. As such it cares about the servers' state, and about reporting its own state to other network components : - Servers' state is continuously monitored using per-server parameters. This ensures the path to the server is operational for regular traffic; - Health checks support separate rise and fall thresholds (hysteresis) for up and down transitions in order to protect against state flapping; - Checks can be sent to a different address/port/protocol : this makes it easy to check a single service that is considered representative of multiple ones, for example the HTTPS port for an HTTP+HTTPS server; - Servers can track other servers and go down simultaneously : this ensures that servers hosting multiple services can fail atomically and that no one will be sent to a partially failed server; - Agents may be deployed on the server to monitor load and health : a server may be interested in reporting its load, operational status, administrative status independently from what health checks can see. By running a simple agent on the server, it's possible to consider the server's view of its own health in addition to the health checks validating the whole path; - Various check methods are available : TCP connect, HTTP request, SMTP hello, SSL hello, LDAP, SQL, Redis, send/expect scripts, all with/without SSL; - State change is notified in the logs and stats page with the failure reason (e.g. the HTTP response received at the moment the failure was detected). An e-mail can also be sent to a configurable address upon such a change; - Server state is also reported on the stats interface and can be used to take routing decisions so that traffic may be sent to different farms depending on their sizes and/or health (e.g.
loss of an inter-DC link); - HAProxy can use health check requests to pass information to the servers, such as their names, weight, the number of other servers in the farm etc. so that servers can adjust their response and decisions based on this knowledge (e.g. postpone backups to keep more CPU available); - Servers can use health checks to report more detailed state than just on/off (e.g. I would like to stop, please stop sending new visitors); - HAProxy itself can report its state to external components such as routers or other load balancers, allowing to build very complete multi-path and multi-layer infrastructures. ``` #### 3.3.4. Basic features : High availability ``` Just like any serious load balancer, HAProxy cares a lot about availability to ensure the best global service continuity : - Only valid servers are used ; the other ones are automatically evicted from load balancing farms ; under certain conditions it is still possible to force to use them though; - Support for a graceful shutdown so that it is possible to take servers out of a farm without affecting any connection; - Backup servers are automatically used when active servers are down and replace them so that sessions are not lost when possible. This also allows to build multiple paths to reach the same server (e.g. multiple interfaces); - Ability to return a global failed status for a farm when too many servers are down. This, combined with the monitoring capabilities makes it possible for an upstream component to choose a different LB node for a given service; - Stateless design makes it easy to build clusters : by design, HAProxy does its best to ensure the highest service continuity without having to store information that could be lost in the event of a failure. This ensures that a takeover is the most seamless possible; - Integrates well with standard VRRP daemon keepalived : HAProxy easily tells keepalived about its state and copes very well with floating virtual IP addresses. Note: only use IP redundancy protocols (VRRP/CARP) over cluster- based solutions (Heartbeat, ...) as they're the ones offering the fastest, most seamless, and most reliable switchover. ``` #### 3.3.5. Basic features : Load balancing ``` HAProxy offers a fairly complete set of load balancing features, most of which are unfortunately not available in a number of other load balancing products : - no less than 10 load balancing algorithms are supported, some of which apply to input data to offer an infinite list of possibilities. 
The most common ones are round-robin (for short connections, pick each server in turn), leastconn (for long connections, pick the least recently used of the servers with the lowest connection count), source (for SSL farms or terminal server farms, the server directly depends on the client's source address), URI (for HTTP caches, the server directly depends on the HTTP URI), hdr (the server directly depends on the contents of a specific HTTP header field), first (for short-lived virtual machines, all connections are packed on the smallest possible subset of servers so that unused ones can be powered down); - all algorithms above support per-server weights so that it is possible to accommodate from different server generations in a farm, or direct a small fraction of the traffic to specific servers (debug mode, running the next version of the software, etc); - dynamic weights are supported for round-robin, leastconn and consistent hashing ; this allows server weights to be modified on the fly from the CLI or even by an agent running on the server; - slow-start is supported whenever a dynamic weight is supported; this allows a server to progressively take the traffic. This is an important feature for fragile application servers which require to compile classes at runtime as well as cold caches which need to fill up before being run at full throttle; - hashing can apply to various elements such as client's source address, URL components, query string element, header field values, POST parameter, RDP cookie; - consistent hashing protects server farms against massive redistribution when adding or removing servers in a farm. That's very important in large cache farms and it allows slow-start to be used to refill cold caches; - a number of internal metrics such as the number of connections per server, per backend, the amount of available connection slots in a backend etc makes it possible to build very advanced load balancing strategies. ``` #### 3.3.6. Basic features : Stickiness ``` Application load balancing would be useless without stickiness. HAProxy provides a fairly comprehensive set of possibilities to maintain a visitor on the same server even across various events such as server addition/removal, down/up cycles, and some methods are designed to be resistant to the distance between multiple load balancing nodes in that they don't require any replication : - stickiness information can be individually matched and learned from different places if desired. For example a JSESSIONID cookie may be matched both in a cookie and in the URL. Up to 8 parallel sources can be learned at the same time and each of them may point to a different stick-table; - stickiness information can come from anything that can be seen within a request or response, including source address, TCP payload offset and length, HTTP query string elements, header field values, cookies, and so on. - stick-tables are replicated between all nodes in a multi-master fashion; - commonly used elements such as SSL-ID or RDP cookies (for TSE farms) are directly accessible to ease manipulation; - all sticking rules may be dynamically conditioned by ACLs; - it is possible to decide not to stick to certain servers, such as backup servers, so that when the nominal server comes back, it automatically takes the load back. This is often used in multi-path environments; - in HTTP it is often preferred not to learn anything and instead manipulate a cookie dedicated to stickiness. 
For this, it's possible to detect, rewrite, insert or prefix such a cookie to let the client remember what server was assigned; - the server may decide to change or clean the stickiness cookie on logout, so that leaving visitors are automatically unbound from the server; - using ACL-based rules it is also possible to selectively ignore or enforce stickiness regardless of the server's state; combined with advanced health checks, that helps admins verify that the server they're installing is up and running before presenting it to the whole world; - an innovative mechanism to set a maximum idle time and duration on cookies ensures that stickiness can be smoothly stopped on devices which are never closed (smartphones, TVs, home appliances) without having to store them on persistent storage; - multiple server entries may share the same stickiness keys so that stickiness is not lost in multi-path environments when one path goes down; - soft-stop ensures that only users with stickiness information will continue to reach the server they've been assigned to but no new users will go there. ``` #### 3.3.7. Basic features : Logging ``` Logging is an extremely important feature for a load balancer, first because a load balancer is often wrongly accused of causing the problems it reveals, and second because it is placed at a critical point in an infrastructure where all normal and abnormal activity needs to be analyzed and correlated with other components. HAProxy provides very detailed logs, with millisecond accuracy and the exact connection accept time that can be searched in firewall logs (e.g. for NAT correlation). By default, TCP and HTTP logs are quite detailed and contain everything needed for troubleshooting, such as source IP address and port, frontend, backend, server, timers (request receipt duration, queue duration, connection setup time, response headers time, data transfer time), global process state, connection counts, queue status, retries count, detailed stickiness actions and disconnect reasons, header captures with a safe output encoding. It is then possible to extend or replace this format to include any sampled data, variables, captures, resulting in very detailed information. For example it is possible to log the number of cumulative requests or number of different URLs visited by a client. The log level may be adjusted per request using standard ACLs, so it is possible to automatically silence some logs considered as pollution and instead raise warnings when some abnormal behavior happens for a small part of the traffic (e.g. too many URLs or HTTP errors for a source address). Administrative logs are also emitted with their own levels to inform about the loss or recovery of a server for example. Each frontend and backend may use multiple independent log outputs, which eases multi-tenancy. Logs are preferably sent over UDP, maybe JSON-encoded, and are truncated after a configurable line length in order to guarantee delivery. But it is also possible to send them to stdout/stderr or any file descriptor, as well as to a ring buffer that a client can subscribe to in order to retrieve them. ``` #### 3.3.8. Basic features : Statistics ``` HAProxy provides a web-based statistics reporting interface with authentication, security levels and scopes. It is thus possible to provide each hosted customer with their own page showing only their own instances. This page can be located in a hidden URL part of the regular web site so that no new port needs to be opened.
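As a hedged example, such a statistics page could be exposed on a hidden URI of an existing frontend roughly as follows (the URI, credentials and admin network below are made up for illustration):

  frontend www
      bind :80
      stats enable
      stats uri /_internal/lb-stats        # hidden path on the regular site, no extra port needed
      stats auth admin:changeme            # basic authentication for this scope
      stats admin if { src 192.0.2.0/24 }  # allow maintenance actions from the admin network only
      default_backend app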
This page may also report the availability of other HAProxy nodes so that it is easy to spot if everything works as expected at a glance. The view is synthetic with a lot of details accessible (such as error causes, last access and last change duration, etc), which are also accessible as a CSV table that other tools may import to draw graphs. The page may self-refresh to be used as a monitoring page on a large display. In administration mode, the page also allows to change server state to ease maintenance operations. A Prometheus exporter is also provided so that the statistics can be consumed in a different format depending on the deployment. ``` ### 3.4. Standard features ``` In this section, some features that are very commonly used in HAProxy but are not necessarily present on other load balancers are enumerated. ``` #### 3.4.1. Standard features : Sampling and converting information ``` HAProxy supports information sampling using a wide set of "sample fetch functions". The principle is to extract pieces of information known as samples, for immediate use. This is used for stickiness, to build conditions, to produce information in logs or to enrich HTTP headers. Samples can be fetched from various sources : - constants : integers, strings, IP addresses, binary blocks; - the process : date, environment variables, server/frontend/backend/process state, byte/connection counts/rates, queue length, random generator, ... - variables : per-session, per-request, per-response variables; - the client connection : source and destination addresses and ports, and all related statistics counters; - the SSL client session : protocol, version, algorithm, cipher, key size, session ID, all client and server certificate fields, certificate serial, SNI, ALPN, NPN, client support for certain extensions; - request and response buffers contents : arbitrary payload at offset/length, data length, RDP cookie, decoding of SSL hello type, decoding of TLS SNI; - HTTP (request and response) : method, URI, path, query string arguments, status code, headers values, positional header value, cookies, captures, authentication, body elements; A sample may then pass through a number of operators known as "converters" to experience some transformation. A converter consumes a sample and produces a new one, possibly of a completely different type. For example, a converter may be used to return only the integer length of the input string, or could turn a string to upper case. Any arbitrary number of converters may be applied in series to a sample before final use. Among all available sample converters, the following ones are the most commonly used : - arithmetic and logic operators : they make it possible to perform advanced computation on input data, such as computing ratios, percentages or simply converting from one unit to another one; - IP address masks are useful when some addresses need to be grouped by larger networks; - data representation : URL-decode, base64, hex, JSON strings, hashing; - string conversion : extract substrings at fixed positions, fixed length, extract specific fields around certain delimiters, extract certain words, change case, apply regex-based substitution; - date conversion : convert to HTTP date format, convert local to UTC and conversely, add or remove offset; - lookup an entry in a stick table to find statistics or assigned server; - map-based key-to-value conversion from a file (mostly used for geolocation). ``` #### 3.4.2. 
Standard features : Maps ``` Maps are a powerful type of converter consisting of loading a two-column file into memory at boot time, then looking up each input sample in the first column and either returning the corresponding pattern from the second column if the entry was found, or returning a default value. The output information also being a sample, it can in turn experience other transformations including other map lookups. Maps are most commonly used to translate the client's IP address to an AS number or country code since they support a longest match for network addresses, but they can be used for various other purposes. Part of their strength comes from being updatable on the fly either from the CLI or from certain actions using other samples, making them capable of storing and retrieving information between subsequent accesses. Another strength comes from the binary tree based indexing which makes them extremely fast even when they contain hundreds of thousands of entries, making geolocation very cheap and easy to set up. ``` #### 3.4.3. Standard features : ACLs and conditions ``` Most operations in HAProxy can be made conditional. Conditions are built by combining multiple ACLs using logic operators (AND, OR, NOT). Each ACL is a series of tests based on the following elements : - a sample fetch method to retrieve the element to test; - an optional series of converters to transform the element; - a list of patterns to match against; - a matching method to indicate how to compare the patterns with the sample. For example, the sample may be taken from the HTTP "Host" header; it could then be converted to lower case, then matched against a number of regex patterns using the regex matching method. Technically, ACLs are built on the same core as the maps; they share the exact same internal structure, pattern matching methods and performance. The only real difference is that instead of returning a sample, they only return "found" or "not found". In terms of usage, ACL patterns may be declared inline in the configuration file and do not require their own file. ACLs may be named for ease of use or to make configurations understandable. A named ACL may be declared multiple times and it will evaluate all definitions in turn until one matches. About 13 different pattern matching methods are provided, among which are IP address masks, integer ranges, substrings and regex. They work like functions, and just like with any programming language, only what is needed is evaluated, so when a condition involving an OR is already true, the next ones are not evaluated, and similarly when a condition involving an AND is already false, the rest of the condition is not evaluated. There is no practical limit to the number of declared ACLs, and a handful of commonly used ones are provided. However experience has shown that setups using a lot of named ACLs are quite hard to troubleshoot and that sometimes using anonymous ACLs inline is easier as it requires fewer references outside the scope being analyzed. ``` #### 3.4.4. Standard features : Content switching ``` HAProxy implements a mechanism known as content-based switching. The principle is that a connection or request arrives on a frontend, then the information carried with this request or connection is processed, and at this point it is possible to write ACL-based conditions making use of this information to decide which backend will process the request. Thus the traffic is directed to one backend or another based on the request's contents.
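As a hedged sketch of such a content switching rule (the ACL names, paths, host name and backend names below are hypothetical):

  frontend www
      bind :80
      acl is_static path_beg /static /images /css   # requests for static objects
      acl is_blog   hdr(host) -i blog.example.com   # virtual hosting on the Host header
      use_backend static_farm if is_static
      use_backend blog_farm   if is_blog
      default_backend app_farm                      # everything else goes to the application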
The most common example consists in using the Host header and/or elements from the path (sub-directories or file-name extensions) to decide whether an HTTP request targets a static object or the application, and to route static objects traffic to a backend made of fast and light servers, and all the remaining traffic to a more complex application server, thus constituting a fine-grained virtual hosting solution. This is quite convenient to make multiple technologies coexist as a more global solution. Another use case of content-switching consists in using different load balancing algorithms depending on various criteria. A cache may use a URI hash while an application would use round-robin. Last but not least, it allows multiple customers to use a small share of a common resource by enforcing per-backend (thus per-customer connection limits). Content switching rules scale very well, though their performance may depend on the number and complexity of the ACLs in use. But it is also possible to write dynamic content switching rules where a sample value directly turns into a backend name and without making use of ACLs at all. Such configurations have been reported to work fine at least with 300000 backends in production. ``` #### 3.4.5. Standard features : Stick-tables ``` Stick-tables are commonly used to store stickiness information, that is, to keep a reference to the server a certain visitor was directed to. The key is then the identifier associated with the visitor (its source address, the SSL ID of the connection, an HTTP or RDP cookie, the customer number extracted from the URL or from the payload, ...) and the stored value is then the server's identifier. Stick tables may use 3 different types of samples for their keys : integers, strings and addresses. Only one stick-table may be referenced in a proxy, and it is designated everywhere with the proxy name. Up to 8 keys may be tracked in parallel. The server identifier is committed during request or response processing once both the key and the server are known. Stick-table contents may be replicated in active-active mode with other HAProxy nodes known as "peers" as well as with the new process during a reload operation so that all load balancing nodes share the same information and take the same routing decision if client's requests are spread over multiple nodes. Since stick-tables are indexed on what allows to recognize a client, they are often also used to store extra information such as per-client statistics. The extra statistics take some extra space and need to be explicitly declared. The type of statistics that may be stored includes the input and output bandwidth, the number of concurrent connections, the connection rate and count over a period, the amount and frequency of errors, some specific tags and counters, etc. In order to support keeping such information without being forced to stick to a given server, a special "tracking" feature is implemented and allows to track up to 3 simultaneous keys from different tables at the same time regardless of stickiness rules. Each stored statistics may be searched, dumped and cleared from the CLI and adds to the live troubleshooting capabilities. While this mechanism can be used to surclass a returning visitor or to adjust the delivered quality of service depending on good or bad behavior, it is mostly used to fight against service abuse and more generally DDoS as it allows to build complex models to detect certain bad behaviors at a high processing speed. ``` #### 3.4.6. 
Standard features : Formatted strings ``` There are many places where HAProxy needs to manipulate character strings, such as logs, redirects, header additions, and so on. In order to provide the greatest flexibility, the notion of Formatted strings was introduced, initially for logging purposes, which explains why it's still called "log-format". These strings contain escape characters allowing to introduce various dynamic data including variables and sample fetch expressions into strings, and even to adjust the encoding while the result is being turned into a string (for example, adding quotes). This provides a powerful way to build header contents, to build response data or even response templates, or to customize log lines. Additionally, in order to remain simple to build most common strings, about 50 special tags are provided as shortcuts for information commonly used in logs. ``` #### 3.4.7. Standard features : HTTP rewriting and redirection ``` Installing a load balancer in front of an application that was never designed for this can be a challenging task without the proper tools. One of the most commonly requested operation in this case is to adjust requests and response headers to make the load balancer appear as the origin server and to fix hard coded information. This comes with changing the path in requests (which is strongly advised against), modifying Host header field, modifying the Location response header field for redirects, modifying the path and domain attribute for cookies, and so on. It also happens that a number of servers are somewhat verbose and tend to leak too much information in the response, making them more vulnerable to targeted attacks. While it's theoretically not the role of a load balancer to clean this up, in practice it's located at the best place in the infrastructure to guarantee that everything is cleaned up. Similarly, sometimes the load balancer will have to intercept some requests and respond with a redirect to a new target URL. While some people tend to confuse redirects and rewriting, these are two completely different concepts, since the rewriting makes the client and the server see different things (and disagree on the location of the page being visited) while redirects ask the client to visit the new URL so that it sees the same location as the server. In order to do this, HAProxy supports various possibilities for rewriting and redirects, among which : - regex-based URL and header rewriting in requests and responses. Regex are the most commonly used tool to modify header values since they're easy to manipulate and well understood; - headers may also be appended, deleted or replaced based on formatted strings so that it is possible to pass information there (e.g. client side TLS algorithm and cipher); - HTTP redirects can use any 3xx code to a relative, absolute, or completely dynamic (formatted string) URI; - HTTP redirects also support some extra options such as setting or clearing a specific cookie, dropping the query string, appending a slash if missing, and so on; - a powerful "return" directive allows to customize every part of a response like status, headers, body using dynamic contents or even template files. - all operations support ACL-based conditions; ``` #### 3.4.8. Standard features : Server protection ``` HAProxy does a lot to maximize service availability, and for this it takes large efforts to protect servers against overloading and attacks. 
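A minimal sketch of two of the protections described in this section, per-server connection limits with queuing and source-based request rate limiting, might look as follows (the numbers and names are arbitrary examples, not recommendations):

  frontend www
      bind :80
      stick-table type ip size 100k expire 10m store http_req_rate(10s)
      http-request track-sc0 src
      http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
      default_backend app

  backend app
      timeout queue 30s                              # how long excess requests may wait in the queue
      server web1 192.0.2.11:8080 check maxconn 100  # never more than 100 concurrent connections per server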
The first and most important point is that only complete and valid requests are forwarded to the servers. The initial reason is that HAProxy needs to find the protocol elements it needs to stay synchronized with the byte stream, and the second reason is that until the request is complete, there is no way to know if some elements will change its semantics. The direct benefit from this is that servers are not exposed to invalid or incomplete requests. This is a very effective protection against slowloris attacks, which have almost no impact on HAProxy. Another important point is that HAProxy contains buffers to store requests and responses, and that by only sending a request to a server when it's complete and by reading the whole response very quickly from the local network, the server side connection is used for a very short time and this preserves server resources as much as possible. A direct extension to this is that HAProxy can artificially limit the number of concurrent connections or outstanding requests to a server, which guarantees that the server will never be overloaded even if it continuously runs at 100% of its capacity during traffic spikes. All excess requests will simply be queued to be processed when one slot is released. In the end, this huge resource savings most often ensures so much better server response times that it ends up actually being faster than by overloading the server. Queued requests may be redispatched to other servers, or even aborted in queue when the client aborts, which also protects the servers against the "reload effect", where each click on "reload" by a visitor on a slow-loading page usually induces a new request and maintains the server in an overloaded state. The slow-start mechanism also protects restarting servers against high traffic levels while they're still finalizing their startup or compiling some classes. Regarding the protocol-level protection, it is possible to relax the HTTP parser to accept non standard-compliant but harmless requests or responses and even to fix them. This allows bogus applications to be accessible while a fix is being developed. In parallel, offending messages are completely captured with a detailed report that help developers spot the issue in the application. The most dangerous protocol violations are properly detected and dealt with and fixed. For example malformed requests or responses with two Content-length headers are either fixed if the values are exactly the same, or rejected if they differ, since it becomes a security problem. Protocol inspection is not limited to HTTP, it is also available for other protocols like TLS or RDP. When a protocol violation or attack is detected, there are various options to respond to the user, such as returning the common "HTTP 400 bad request", closing the connection with a TCP reset, or faking an error after a long delay ("tarpit") to confuse the attacker. All of these contribute to protecting the servers by discouraging the offending client from pursuing an attack that becomes very expensive to maintain. HAProxy also proposes some more advanced options to protect against accidental data leaks and session crossing. Not only it can log suspicious server responses but it will also log and optionally block a response which might affect a given visitors' confidentiality. One such example is a cacheable cookie appearing in a cacheable response and which may result in an intermediary cache to deliver it to another visitor, causing an accidental session sharing. ``` ### 3.5. 
Advanced features #### 3.5.1. Advanced features : Management ``` HAProxy is designed to remain extremely stable and safe to manage in a regular production environment. It is provided as a single executable file which doesn't require any installation process. Multiple versions can easily coexist, meaning that it's possible (and recommended) to upgrade instances progressively by order of importance instead of migrating all of them at once. Configuration files are easily versioned. Configuration checking is done off-line so it doesn't require to restart a service that will possibly fail. During configuration checks, a number of advanced mistakes may be detected (e.g. a rule hiding another one, or stickiness that will not work) and detailed warnings and configuration hints are proposed to fix them. Backwards configuration file compatibility goes very far away in time, with version 1.5 still fully supporting configurations for versions 1.1 written 13 years before, and 1.6 only dropping support for almost unused, obsolete keywords that can be done differently. The configuration and software upgrade mechanism is smooth and non disruptive in that it allows old and new processes to coexist on the system, each handling its own connections. System status, build options, and library compatibility are reported on startup. Some advanced features allow an application administrator to smoothly stop a server, detect when there's no activity on it anymore, then take it off-line, stop it, upgrade it and ensure it doesn't take any traffic while being upgraded, then test it again through the normal path without opening it to the public, and all of this without touching HAProxy at all. This ensures that even complicated production operations may be done during opening hours with all technical resources available. The process tries to save resources as much as possible, uses memory pools to save on allocation time and limit memory fragmentation, releases payload buffers as soon as their contents are sent, and supports enforcing strong memory limits above which connections have to wait for a buffer to become available instead of allocating more memory. This system helps guarantee memory usage in certain strict environments. A command line interface (CLI) is available as a UNIX or TCP socket, to perform a number of operations and to retrieve troubleshooting information. Everything done on this socket doesn't require a configuration change, so it is mostly used for temporary changes. Using this interface it is possible to change a server's address, weight and status, to consult statistics and clear counters, dump and clear stickiness tables, possibly selectively by key criteria, dump and kill client-side and server-side connections, dump captured errors with a detailed analysis of the exact cause and location of the error, dump, add and remove entries from ACLs and maps, update TLS shared secrets, apply connection limits and rate limits on the fly to arbitrary frontends (useful in shared hosting environments), and disable a specific frontend to release a listening port (useful when daytime operations are forbidden and a fix is needed nonetheless). Updating certificates and their configuration on the fly is permitted, as well as enabling and consulting traces of every processing step of the traffic. For environments where SNMP is mandatory, at least two agents exist, one is provided with the HAProxy sources and relies on the Net-SNMP Perl module. 
Another one is provided with the commercial packages and doesn't require Perl. Both are roughly equivalent in terms of coverage. It is often recommended to install 4 utilities on the machine where HAProxy is deployed : - socat (in order to connect to the CLI, though certain forks of netcat can also do it to some extents); - halog from the latest HAProxy version : this is the log analysis tool, it parses native TCP and HTTP logs extremely fast (1 to 2 GB per second) and extracts useful information and statistics such as requests per URL, per source address, URLs sorted by response time or error rate, termination codes etc. It was designed to be deployed on the production servers to help troubleshoot live issues so it has to be there ready to be used; - tcpdump : this is highly recommended to take the network traces needed to troubleshoot an issue that was made visible in the logs. There is a moment where application and haproxy's analysis will diverge and the network traces are the only way to say who's right and who's wrong. It's also fairly common to detect bugs in network stacks and hypervisors thanks to tcpdump; - strace : it is tcpdump's companion. It will report what HAProxy really sees and will help sort out the issues the operating system is responsible for from the ones HAProxy is responsible for. Strace is often requested when a bug in HAProxy is suspected; ``` #### 3.5.2. Advanced features : System-specific capabilities ``` Depending on the operating system HAProxy is deployed on, certain extra features may be available or needed. While it is supported on a number of platforms, HAProxy is primarily developed on Linux, which explains why some features are only available on this platform. The transparent bind and connect features, the support for binding connections to a specific network interface, as well as the ability to bind multiple processes to the same IP address and ports are only available on Linux and BSD systems, though only Linux performs a kernel-side load balancing of the incoming requests between the available processes. On Linux, there are also a number of extra features and optimizations including support for network namespaces (also known as "containers") allowing HAProxy to be a gateway between all containers, the ability to set the MSS, Netfilter marks and IP TOS field on the client side connection, support for TCP FastOpen on the listening side, TCP user timeouts to let the kernel quickly kill connections when it detects the client has disappeared before the configured timeouts, TCP splicing to let the kernel forward data between the two sides of a connections thus avoiding multiple memory copies, the ability to enable the "defer-accept" bind option to only get notified of an incoming connection once data become available in the kernel buffers, and the ability to send the request with the ACK confirming a connect (sometimes called "piggy-back") which is enabled with the "tcp-smart-connect" option. On Linux, HAProxy also takes great care of manipulating the TCP delayed ACKs to save as many packets as possible on the network. Some systems have an unreliable clock which jumps back and forth in the past and in the future. This used to happen with some NUMA systems where multiple processors didn't see the exact same time of day, and recently it became more common in virtualized environments where the virtual clock has no relation with the real clock, resulting in huge time jumps (sometimes up to 30 seconds have been observed). 
This causes a lot of trouble with respect to timeout enforcement in general. Due to this flaw of these systems, HAProxy maintains its own monotonic clock which is based on the system's clock but where drift is measured and compensated for. This ensures that even with a very bad system clock, timers remain reasonably accurate and timeouts continue to work. Note that this problem affects all the software running on such systems and is not specific to HAProxy. The common effects are spurious timeouts or application freezes. Thus if this behavior is detected on a system, it must be fixed, regardless of the fact that HAProxy protects itself against it. On Linux, a new starting process may communicate with the previous one to reuse its listening file descriptors so that the listening sockets are never interrupted during the process's replacement. ``` #### 3.5.3. Advanced features : Scripting ``` HAProxy can be built with support for the Lua embedded language, which opens a wide area of new possibilities related to complex manipulation of requests or responses, routing decisions, statistics processing and so on. Using Lua it is even possible to establish parallel connections to other servers to exchange information. This way it becomes possible (though complex) to develop an authentication system for example. Please refer to the documentation in the file "doc/lua-api/index.rst" for more information on how to use Lua. ``` #### 3.5.4. Advanced features: Tracing ``` At any moment an administrator may connect over the CLI and enable tracing in various internal subsystems. Various levels of details are provided by default so that in practice anything between one line per request to 500 lines per request can be retrieved. Filters as well as an automatic capture on/off/pause mechanism are available so that it really is possible to wait for a certain event and watch it in detail. This is extremely convenient to diagnose protocol violations from faulty servers and clients, or denial of service attacks. ``` ### 3.6. Sizing ``` Typical CPU usage figures show 15% of the processing time spent in HAProxy versus 85% in the kernel in TCP or HTTP close mode, and about 30% for HAProxy versus 70% for the kernel in HTTP keep-alive mode. This means that the operating system and its tuning have a strong impact on the global performance. Usages vary a lot between users, some focus on bandwidth, other ones on request rate, others on connection concurrency, others on SSL performance. This section aims at providing a few elements to help with this task. It is important to keep in mind that every operation comes with a cost, so each individual operation adds its overhead on top of the other ones, which may be negligible in certain circumstances, and which may dominate in other cases. 
When processing the requests from a connection, we can say that : - forwarding data costs less than parsing request or response headers; - parsing request or response headers costs less than establishing then closing a connection to a server; - establishing and closing a connection costs less than a TLS resume operation; - a TLS resume operation costs less than a full TLS handshake with a key computation; - an idle connection costs less CPU than a connection whose buffers hold data; - a TLS context costs even more memory than a connection with data; So in practice, it is cheaper to process payload bytes than header bytes, thus it is easier to achieve high network bandwidth with large objects (few requests per volume unit) than with small objects (many requests per volume unit). This explains why maximum bandwidth is always measured with large objects, while request rate or connection rates are measured with small objects. Some operations scale well on multiple processes spread over multiple CPUs, and others don't scale as well. Network bandwidth doesn't scale very far because the CPU is rarely the bottleneck for large objects, it's mostly the network bandwidth and data buses to reach the network interfaces. The connection rate doesn't scale well over multiple processors due to a few locks in the system when dealing with the local ports table. The request rate over persistent connections scales very well as it doesn't involve much memory nor network bandwidth and doesn't require access to locked structures. TLS key computation scales very well as it's totally CPU-bound. TLS resume scales moderately well, but reaches its limits around 4 processes where the overhead of accessing the shared table offsets the small gains expected from more power. The performance numbers one can expect from a very well tuned system are in the following range. It is important to take them as orders of magnitude and to expect significant variations in any direction based on the processor, IRQ setting, memory type, network interface type, operating system tuning and so on. The following numbers were found on a Core i7 running at 3.7 GHz equipped with a dual-port 10 Gbps NIC running Linux kernel 3.10, HAProxy 1.6 and OpenSSL 1.0.2. HAProxy was running as a single process on a single dedicated CPU core, and two extra cores were dedicated to network interrupts : - 20 Gbps of maximum network bandwidth in clear text for objects 256 kB or higher, 10 Gbps for 41 kB or higher; - 4.6 Gbps of TLS traffic using AES256-GCM cipher with large objects; - 83000 TCP connections per second from client to server; - 82000 HTTP connections per second from client to server; - 97000 HTTP requests per second in server-close mode (keep-alive with the client, close with the server); - 243000 HTTP requests per second in end-to-end keep-alive mode; - 300000 filtered TCP connections per second (anti-DDoS); - 160000 HTTPS requests per second in keep-alive mode over persistent TLS connections; - 13100 HTTPS requests per second using TLS resumed connections; - 1300 HTTPS connections per second using TLS connections renegotiated with RSA2048; - 20000 concurrent saturated connections per GB of RAM, including the memory required for system buffers; it is possible to do better with careful tuning but this result is easy to achieve.
- about 8000 concurrent TLS connections (client-side only) per GB of RAM, including the memory required for system buffers; - about 5000 concurrent end-to-end TLS connections (both sides) per GB of RAM including the memory required for system buffers; A more recent benchmark featuring the multi-thread enabled HAProxy 2.4 on a 64-core ARM Graviton2 processor in AWS reached 2 million HTTPS requests per second at sub-millisecond response time, and 100 Gbps of traffic: https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance/ Thus a good rule of thumb to keep in mind is that the request rate is divided by 10 between TLS keep-alive and TLS resume, and between TLS resume and TLS renegotiation, while it's only divided by 3 between HTTP keep-alive and HTTP close. Another good rule of thumb is to remember that a high frequency core with AES instructions can do around 20 Gbps of AES-GCM per core. Another good rule of thumb is to consider that on the same server, HAProxy will be able to saturate : - about 5-10 static file servers or caching proxies; - about 100 anti-virus proxies; - and about 100-1000 application servers depending on the technology in use. ``` ### 3.7. How to get HAProxy ``` HAProxy is an open source project covered by the GPLv2 license, meaning that everyone is allowed to redistribute it provided that access to the sources is also provided upon request, especially if any modifications were made. HAProxy evolves as a main development branch called "master" or "mainline", from which new branches are derived once the code is considered stable. A lot of web sites run some development branches in production on a voluntarily basis, either to participate to the project or because they need a bleeding edge feature, and their feedback is highly valuable to fix bugs and judge the overall quality and stability of the version being developed. The new branches that are created when the code is stable enough constitute a stable version and are generally maintained for several years, so that there is no emergency to migrate to a newer branch even when you're not on the latest. Once a stable branch is issued, it may only receive bug fixes, and very rarely minor feature updates when that makes users' life easier. All fixes that go into a stable branch necessarily come from the master branch. This guarantees that no fix will be lost after an upgrade. For this reason, if you fix a bug, please make the patch against the master branch, not the stable branch. You may even discover it was already fixed. This process also ensures that regressions in a stable branch are extremely rare, so there is never any excuse for not upgrading to the latest version in your current branch. Branches are numbered with two digits delimited with a dot, such as "1.6". Since 1.9, branches with an odd second digit are mostly focused on sensitive technical updates and more aimed at advanced users because they are likely to trigger more bugs than the other ones. They are maintained for about a year only and must not be deployed where they cannot be rolled back in emergency. A complete version includes one or two sub-version numbers indicating the level of fix. For example, version 1.5.14 is the 14th fix release in branch 1.5 after version 1.5.0 was issued. It contains 126 fixes for individual bugs, 24 updates on the documentation, and 75 other backported patches, most of which were needed to fix the aforementioned 126 bugs. 
An existing feature may never be modified nor removed in a stable branch, in order to guarantee that upgrades within the same branch will always be harmless. HAProxy is available from multiple sources, at different release rhythms : - The official community web site : http://www.haproxy.org/ : this site provides the sources of the latest development release, all stable releases, as well as nightly snapshots for each branch. The release cycle is not fast, several months between stable releases, or between development snapshots. Very old versions are still supported there. Everything is provided as sources only, so whatever comes from there needs to be rebuilt and/or repackaged; - GitHub : https://github.com/haproxy/haproxy/ : this is the mirror for the development branch only, which provides integration with the issue tracker, continuous integration and code coverage tools. This is exclusively for contributors; - A number of operating systems such as Linux distributions and BSD ports. These systems generally provide long-term maintained versions which do not always contain all the fixes from the official ones, but which at least contain the critical fixes. It often is a good option for most users who do not seek advanced configurations and just want to keep updates easy; - Commercial versions from http://www.haproxy.com/ : these are supported professional packages built for various operating systems or provided as appliances, based on the latest stable versions and including a number of features backported from the next release for which there is a strong demand. It is the best option for users seeking the latest features with the reliability of a stable branch, the fastest response time to fix bugs, or simply support contracts on top of an open source product; In order to ensure that the version you're using is the latest one in your branch, you need to proceed this way : - verify which HAProxy executable you're running : some systems ship it by default and administrators install their versions somewhere else on the system, so it is important to verify in the startup scripts which one is used; - determine which source your HAProxy version comes from. For this, it's generally sufficient to type "haproxy -v". A development version will appear like this, with the "dev" word after the branch number : HAProxy version 2.4-dev18-a5357c-137 2021/05/09 - https://haproxy.org/ A stable version will appear like this, as well as unmodified stable versions provided by operating system vendors : HAProxy version 1.5.14 2015/07/02 And a nightly snapshot of a stable version will appear like this with an hexadecimal sequence after the version, and with the date of the snapshot instead of the date of the release : HAProxy version 1.5.14-e4766ba 2015/07/29 Any other format may indicate a system-specific package with its own patch set. For example HAProxy Enterprise versions will appear with the following format (<branch>-<latest commit>-<revision>) : HAProxy version 1.5.0-994126-357 2015/07/02 Please note that historically versions prior to 2.4 used to report the process name with a hyphen between "HA" and "Proxy", including those above which were adjusted to show the correct format only, so better ignore this word or use a relaxed match in scripts. Additionally, modern versions add a URL linking to the project's home. 
Finally, versions 2.1 and above will include a "Status" line indicating whether the version is safe for production or not, and if so, till when, as well as a link to the list of known bugs affecting this version. - for system-specific packages, you have to check with your vendor's package repository or update system to ensure that your system is still supported, and that fixes are still provided for your branch. For community versions coming from haproxy.org, just visit the site, verify the status of your branch and compare the latest version with yours to see if you're on the latest one. If not you can upgrade. If your branch is not maintained anymore, you're definitely very late and will have to consider an upgrade to a more recent branch (carefully read the README when doing so). HAProxy will have to be updated according to the source it came from. Usually it follows the system vendor's way of upgrading a package. If it was taken from sources, please read the README file in the sources directory after extracting the sources and follow the instructions for your operating system. ``` 4. Companion products and alternatives --------------------------------------- ``` HAProxy integrates fairly well with certain products listed below, which is why they are mentioned here even if not directly related to HAProxy. ``` ### 4.1. Apache HTTP server ``` Apache is the de-facto standard HTTP server. It's a very complete and modular project supporting both file serving and dynamic contents. It can serve as a frontend for some application servers. It can even proxy requests and cache responses. In all of these use cases, a front load balancer is commonly needed. Apache can work in various modes, some being heavier than others. Certain modules still require the heavier pre-forked model and will prevent Apache from scaling well with a high number of connections. In this case HAProxy can provide a tremendous help by enforcing the per-server connection limits to a safe value and will significantly speed up the server and preserve its resources that will be better used by the application. Apache can extract the client's address from the X-Forwarded-For header by using the "mod_rpaf" extension. HAProxy will automatically feed this header when "option forwardfor" is specified in its configuration. HAProxy may also offer a nice protection to Apache when exposed to the internet, where it will better resist a wide number of types of DoS attacks. ``` ### 4.2. NGINX ``` NGINX is the second de-facto standard HTTP server. Just like Apache, it covers a wide range of features. NGINX is built on a similar model as HAProxy so it has no problem dealing with tens of thousands of concurrent connections. When used as a gateway to some applications (e.g. using the included PHP FPM) it can often be beneficial to set up some frontend connection limiting to reduce the load on the PHP application. HAProxy will clearly be useful there both as a regular load balancer and as the traffic regulator to speed up PHP by decongesting it. Also since both products use very little CPU thanks to their event-driven architecture, it's often easy to install both of them on the same system. NGINX implements HAProxy's PROXY protocol, thus it is easy for HAProxy to pass the client's connection information to NGINX so that the application gets all the relevant information. 
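As a hedged sketch, passing the PROXY protocol to NGINX nodes, here combined with the consistent URI hashing discussed just below, could look like this (server names and addresses are illustrative):

  backend nginx_nodes
      balance uri                                   # hash the URI so a given object tends to hit the same node
      hash-type consistent                          # minimize redistribution when nodes are added or removed
      server web1 192.0.2.21:80 send-proxy check    # send-proxy passes the client's connection information
      server web2 192.0.2.22:80 send-proxy check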
Some benchmarks have also shown that for large static file serving, implementing consistent hash on HAProxy in front of NGINX can be beneficial by optimizing the OS's cache hit ratio, which is basically multiplied by the number of server nodes.
```

### 4.3. Varnish

```
Varnish is a smart caching reverse-proxy, probably best described as a web application accelerator. Varnish doesn't implement SSL/TLS and wants to dedicate all of its CPU cycles to what it does best. Varnish also implements HAProxy's PROXY protocol so that HAProxy can very easily be deployed in front of Varnish as an SSL offloader as well as a load balancer and pass it all relevant client information. Also, Varnish naturally supports decompression from the cache when a server has provided a compressed object, but does not compress itself. HAProxy can then be used to compress outgoing data when backend servers do not implement compression, though it's rarely a good idea to compress on the load balancer unless the traffic is low.

When building large caching farms across multiple nodes, HAProxy can make use of consistent URL hashing to intelligently distribute the load to the caching nodes and avoid cache duplication, resulting in a total cache size which is the sum of all caching nodes. In addition, caching of very small dumb objects for a short duration on HAProxy can sometimes save network round trips and reduce the CPU load on both the HAProxy and the Varnish nodes. This is only possible if no processing is done on these objects on Varnish (this is often referred to as the notion of "favicon cache", by which a sizeable percentage of useless downstream requests can sometimes be avoided). However, do not enable HAProxy caching for a long time (more than a few seconds) in front of any other cache: that would significantly complicate troubleshooting without providing really significant savings.
```

### 4.4. Alternatives

```
Linux Virtual Server (LVS or IPVS) is the layer 4 load balancer included within the Linux kernel. It works at the packet level and handles TCP and UDP. In most cases it's more a complement than an alternative since it doesn't have layer 7 knowledge at all.

Pound is another well-known load balancer. It's much simpler and has far fewer features than HAProxy, but for many very basic setups both can be used. Its author has always focused on code auditability first and wants to keep the feature set small. Its thread-based architecture scales less well with high connection counts, but it's a good product.

Pen is a quite lightweight load balancer. It supports SSL and maintains persistence using a fixed-size table of its clients' IP addresses. It supports a packet-oriented mode allowing it to support direct server return and UDP to some extent. It is meant for small loads (the persistence table only has 2048 entries).

NGINX can do some load balancing to some extent, though it's clearly not its primary function. Production traffic is used to detect server failures, the load balancing algorithms are more limited, and the stickiness is very limited. But it can make sense in some simple deployment scenarios where it is already present. The good thing is that since it integrates very well with HAProxy, there's nothing wrong with adding HAProxy later when its limits have been reached.

Varnish also does some load balancing of its backend servers and does support real health checks. It doesn't implement stickiness however, so just like with NGINX, as long as stickiness is not needed that can be enough to start with.
And similarly, since HAProxy and Varnish integrate so well together, it's easy to add it later into the mix to complement the feature set.
```

5. Contacts
------------

```
If you want to contact the developers or any community member about anything, the best way to do it usually is via the mailing list by sending your message to [email protected]. Please note that this list is public and its archives are public as well, so you should avoid disclosing sensitive information. Thousands of users of various experience levels are present there and even the most complex questions usually find an optimal response relatively quickly. Suggestions are welcome too.

For users having difficulties with e-mail, a Discourse platform is available at http://discourse.haproxy.org/ . However, please keep in mind that there are fewer people reading questions there and that most are handled by a really tiny team. In any case, please be patient and respectful with those who devote their spare time helping others.

If you believe you've found a bug but are not sure, it's best reported on the mailing list. If you're quite convinced you've found a bug, that your version is up-to-date in its branch, and you already have a GitHub account, feel free to go directly to https://github.com/haproxy/haproxy/ and file an issue with all available details. Again, this is public, so be careful not to post information you might later regret. Since the issue tracker presents itself as a very long thread, please avoid pasting very long dumps (a few hundred lines or more) and attach them instead.

If you've found what you're absolutely certain can be considered a critical security issue that would put many users in serious trouble if discussed in a public place, then you can send it with the reproducer to [email protected]. A small team of trusted developers will receive it and will be able to propose a fix. We usually don't use embargoes and once a fix is available it gets merged. In some rare circumstances it can happen that a release is coordinated with software vendors. Please note that this process usually disrupts everyone's work, and that rushed releases can sometimes introduce new bugs, so it's best avoided unless strictly necessary; as such, there is often little consideration for reports that needlessly cause such extra burden, and the best way to see your work credited usually is to provide a working fix, which will appear in changelogs.
```
haproxy Configuration Manual Configuration Manual ==================== ``` This document covers the configuration language as implemented in the version specified above. It does not provide any hints, examples, or advice. For such documentation, please refer to the Reference Manual or the Architecture Manual. The summary below is meant to help you find sections by name and navigate through the document. Note to documentation contributors : This document is formatted with 80 columns per line, with even number of spaces for indentation and without tabs. Please follow these rules strictly so that it remains easily printable everywhere. If a line needs to be printed verbatim and does not fit, please end each line with a backslash ('\') and continue on next line, indented by two characters. It is also sometimes useful to prefix all output lines (logs, console outputs) with 3 closing angle brackets ('>>>') in order to emphasize the difference between inputs and outputs when they may be ambiguous. If you add sections, please update the summary below for easier searching. ``` 1. Quick reminder about HTTP ----------------------------- ``` When HAProxy is running in HTTP mode, both the request and the response are fully analyzed and indexed, thus it becomes possible to build matching criteria on almost anything found in the contents. However, it is important to understand how HTTP requests and responses are formed, and how HAProxy decomposes them. It will then become easier to write correct rules and to debug existing configurations. ``` ### 1.1. The HTTP transaction model ``` The HTTP protocol is transaction-driven. This means that each request will lead to one and only one response. Traditionally, a TCP connection is established from the client to the server, a request is sent by the client through the connection, the server responds, and the connection is closed. A new request will involve a new connection : [CON1] [REQ1] ... [RESP1] [CLO1] [CON2] [REQ2] ... [RESP2] [CLO2] ... In this mode, called the "HTTP close" mode, there are as many connection establishments as there are HTTP transactions. Since the connection is closed by the server after the response, the client does not need to know the content length. Due to the transactional nature of the protocol, it was possible to improve it to avoid closing a connection between two subsequent transactions. In this mode however, it is mandatory that the server indicates the content length for each response so that the client does not wait indefinitely. For this, a special header is used: "Content-length". This mode is called the "keep-alive" mode : [CON] [REQ1] ... [RESP1] [REQ2] ... [RESP2] [CLO] ... Its advantages are a reduced latency between transactions, and less processing power required on the server side. It is generally better than the close mode, but not always because the clients often limit their concurrent connections to a smaller value. Another improvement in the communications is the pipelining mode. It still uses keep-alive, but the client does not wait for the first response to send the second request. This is useful for fetching large number of images composing a page : [CON] [REQ1] [REQ2] ... [RESP1] [RESP2] [CLO] ... This can obviously have a tremendous benefit on performance because the network latency is eliminated between subsequent requests. Many HTTP agents do not correctly support pipelining since there is no way to associate a response with the corresponding request in HTTP. 
For this reason, it is mandatory for the server to reply in the exact same order as the requests were received. The next improvement is the multiplexed mode, as implemented in HTTP/2 and HTTP/3. This time, each transaction is assigned a single stream identifier, and all streams are multiplexed over an existing connection. Many requests can be sent in parallel by the client, and responses can arrive in any order since they also carry the stream identifier. HTTP/3 is implemented over QUIC, itself implemented over UDP. QUIC solves the head of line blocking at transport level by means of independently treated streams. Indeed, when experiencing loss, an impacted stream does not affect the other streams. By default HAProxy operates in keep-alive mode with regards to persistent connections: for each connection it processes each request and response, and leaves the connection idle on both sides between the end of a response and the start of a new request. When it receives HTTP/2 connections from a client, it processes all the requests in parallel and leaves the connection idling, waiting for new requests, just as if it was a keep-alive HTTP connection. HAProxy supports 4 connection modes : - keep alive : all requests and responses are processed (default) - tunnel : only the first request and response are processed, everything else is forwarded with no analysis (deprecated). - server close : the server-facing connection is closed after the response. - close : the connection is actively closed after end of response. ``` ### 1.2. HTTP request ``` First, let's consider this HTTP request : Line Contents number 1 GET /serv/login.php?lang=en&profile=2 HTTP/1.1 2 Host: www.mydomain.com 3 User-agent: my small browser 4 Accept: image/jpeg, image/gif 5 Accept: image/png ``` #### 1.2.1. The Request line ``` Line 1 is the "request line". It is always composed of 3 fields : - a METHOD : GET - a URI : /serv/login.php?lang=en&profile=2 - a version tag : HTTP/1.1 All of them are delimited by what the standard calls LWS (linear white spaces), which are commonly spaces, but can also be tabs or line feeds/carriage returns followed by spaces/tabs. The method itself cannot contain any colon (':') and is limited to alphabetic letters. All those various combinations make it desirable that HAProxy performs the splitting itself rather than leaving it to the user to write a complex or inaccurate regular expression. The URI itself can have several forms : - A "relative URI" : /serv/login.php?lang=en&profile=2 It is a complete URL without the host part. This is generally what is received by servers, reverse proxies and transparent proxies. - An "absolute URI", also called a "URL" : http://192.168.0.12:8080/serv/login.php?lang=en&profile=2 It is composed of a "scheme" (the protocol name followed by '://'), a host name or address, optionally a colon (':') followed by a port number, then a relative URI beginning at the first slash ('/') after the address part. This is generally what proxies receive, but a server supporting HTTP/1.1 must accept this form too. - a star ('*') : this form is only accepted in association with the OPTIONS method and is not relayable. It is used to inquiry a next hop's capabilities. - an address:port combination : 192.168.0.12:80 This is used with the CONNECT method, which is used to establish TCP tunnels through HTTP proxies, generally for HTTPS, but sometimes for other protocols too. In a relative URI, two sub-parts are identified. The part before the question mark is called the "[path](#path)". 
It is typically the relative path to static objects on the server. The part after the question mark is called the "query string". It is mostly used with GET requests sent to dynamic scripts and is very specific to the language, framework or application in use.

HTTP/2 doesn't convey version information with the request, so the version is assumed to be the same as the one of the underlying protocol (i.e. "HTTP/2").
```

#### 1.2.2. The request headers

```
The headers start at the second line. They are composed of a name at the beginning of the line, immediately followed by a colon (':'). Traditionally, an LWS is added after the colon but that's not required. Then come the values. Multiple identical headers may be folded into one single line, delimiting the values with commas, provided that their order is respected. This is commonly encountered in the "Cookie:" field. A header may span over multiple lines if the subsequent lines begin with an LWS. In the example in 1.2, lines 4 and 5 define a total of 3 values for the "Accept:" header.

Contrary to a common misconception, header names are not case-sensitive, and their values are not either if they refer to other header names (such as the "Connection:" header). In HTTP/2, header names are always sent in lower case, as can be seen when running in debug mode. Internally, all header names are normalized to lower case so that HTTP/1.x and HTTP/2 use the exact same representation, and they are sent as-is on the other side. This explains why an HTTP/1.x request typed with camel case is delivered in lower case.

The end of the headers is indicated by the first empty line. People often say that it's a double line feed, which is not exact, even if a double line feed is one valid form of empty line.

Fortunately, HAProxy takes care of all these complex combinations when indexing headers, checking values and counting them, so there is no reason to worry about the way they could be written, but it is important not to accuse an application of being buggy if it does unusual, valid things.

Important note: As suggested by RFC7231, HAProxy normalizes headers by replacing line breaks in the middle of headers by LWS in order to join multi-line headers. This is necessary for proper analysis and helps less capable HTTP parsers to work correctly and not to be fooled by such complex constructs.
```

### 1.3. HTTP response

```
An HTTP response looks very much like an HTTP request. Both are called HTTP messages. Let's consider this HTTP response :

  Line     Contents
  number
     1     HTTP/1.1 200 OK
     2     Content-length: 350
     3     Content-Type: text/html

As a special case, HTTP supports so-called "Informational responses" as status codes 1xx. These messages are special in that they don't convey any part of the response, they're just used as sort of a signaling message to ask a client to continue to post its request for instance. In the case of a status 100 response the requested information will be carried by the next non-100 response message following the informational one. This implies that multiple responses may be sent to a single request, and that this only works when keep-alive is enabled (1xx messages are HTTP/1.1 only). HAProxy handles these messages and is able to correctly forward and skip them, and only process the next non-100 response. As such, these messages are neither logged nor transformed, unless explicitly stated otherwise.
Status 101 messages indicate that the protocol is changing over the same connection and that HAProxy must switch to tunnel mode, just as if a CONNECT had occurred. Then the Upgrade header would contain additional information about the type of protocol the connection is switching to. ``` #### 1.3.1. The response line ``` Line 1 is the "response line". It is always composed of 3 fields : - a version tag : HTTP/1.1 - a status code : 200 - a reason : OK The status code is always 3-digit. The first digit indicates a general status : - 1xx = informational message to be skipped (e.g. 100, 101) - 2xx = OK, content is following (e.g. 200, 206) - 3xx = OK, no content following (e.g. 302, 304) - 4xx = error caused by the client (e.g. 401, 403, 404) - 5xx = error caused by the server (e.g. 500, 502, 503) Please refer to RFC7231 for the detailed meaning of all such codes. The "reason" field is just a hint, but is not parsed by clients. Anything can be found there, but it's a common practice to respect the well-established messages. It can be composed of one or multiple words, such as "OK", "Found", or "Authentication Required". HAProxy may emit the following status codes by itself : Code When / reason 200 access to stats page, and when replying to monitoring requests 301 when performing a redirection, depending on the configured code 302 when performing a redirection, depending on the configured code 303 when performing a redirection, depending on the configured code 307 when performing a redirection, depending on the configured code 308 when performing a redirection, depending on the configured code 400 for an invalid or too large request 401 when an authentication is required to perform the action (when accessing the stats page) 403 when a request is forbidden by a "[http-request deny](#http-request%20deny)" rule 404 when the requested resource could not be found 408 when the request timeout strikes before the request is complete 410 when the requested resource is no longer available and will not be available again 500 when HAProxy encounters an unrecoverable internal error, such as a memory allocation failure, which should never happen 501 when HAProxy is unable to satisfy a client request because of an unsupported feature 502 when the server returns an empty, invalid or incomplete response, or when an "[http-response deny](#http-response%20deny)" rule blocks the response. 503 when no server was available to handle the request, or in response to monitoring requests which match the "[monitor fail](#monitor%20fail)" condition 504 when the response timeout strikes before the server responds The error 4xx and 5xx codes above may be customized (see "[errorloc](#errorloc)" in section 4.2). ``` #### 1.3.2. The response headers ``` Response headers work exactly like request headers, and as such, HAProxy uses the same parsing function for both. Please refer to paragraph 1.2.2 for more details. ``` 2. Configuring HAProxy ----------------------- ### 2.1. Configuration file format ``` HAProxy's configuration process involves 3 major sources of parameters : - the arguments from the command-line, which always take precedence - the configuration file(s), whose format is described here - the running process's environment, in case some environment variables are explicitly referenced The configuration file follows a fairly simple hierarchical format which obey a few basic rules: 1. a configuration file is an ordered sequence of statements 2. a statement is a single non-empty line before any unprotected "#" (hash) 3. 
a line is a series of tokens or "words" delimited by unprotected spaces or tab characters
4. the first word or sequence of words of a line is one of the keywords or keyword sequences listed in this document
5. all other words are all arguments of the first one, some being well-known keywords listed in this document, others being values, references to other parts of the configuration, or expressions
6. certain keywords delimit a section inside which only a subset of keywords are supported
7. a section ends at the end of a file or on a special keyword starting a new section

This is all that is needed to know to write a simple but reliable configuration generator, but this is not enough to reliably parse any configuration nor to figure out how to deal with certain corner cases.

First, there are a few consequences of the rules above. Rules 6 and 7 imply that the keywords used to define a new section are valid everywhere and cannot have a different meaning in a specific section. These keywords are always a single word (as opposed to a sequence of words), and traditionally the section that follows them is designated using the same name. For example when speaking about the "global section", it designates the section of configuration that follows the "global" keyword. This convention is used a lot in error messages to help locate the parts that need to be addressed.

A number of sections create an internal object or configuration space, which needs to be distinguished from the other ones. In this case they will take an extra word which will set the name of this particular section. For some of them the section name is mandatory. For example "frontend foo" will create a new section of type "frontend" named "foo". Usually a name is specific to its section and two sections of different types may use the same name, but this is not recommended as it tends to complicate configuration management.

A direct consequence of rule 7 is that when multiple files are read at once, each of them must start with a new section, and the end of each file will end a section. A file cannot contain sub-sections nor end an existing section and start a new one.

Rule 1 mentioned that ordering matters. Indeed, some keywords create directives that can be repeated multiple times to create ordered sequences of rules to be applied in a certain order. For example "[tcp-request](#tcp-request)" can be used to alternate "accept" and "reject" rules on varying criteria. As such, a configuration file processor must always preserve a section's ordering when editing a file. The ordering of sections usually does not matter except for the global section which must be placed before other sections, but it may be repeated if needed. In addition, some identifiers may automatically be assigned to some of the created objects (e.g. proxies), and by reordering sections, their identifiers will change. These identifiers appear in the statistics, for example. As such, the configuration below will assign "foo" ID number 1 and "bar" ID number 2, which will be swapped if the two sections are reversed:

    listen foo
        bind :80

    listen bar
        bind :81

Another important point is that according to rules 2 and 3 above, empty lines, spaces, tabs, and comments following an unprotected "#" character are not part of the configuration as they are just used as delimiters.
This implies that the following configurations are strictly equivalent: global#this is the global section daemon#daemonize frontend foo mode http # or tcp and: global daemon # this is the public web frontend frontend foo mode http The common practice is to align to the left only the keyword that initiates a new section, and indent (i.e. prepend a tab character or a few spaces) all other keywords so that it's instantly visible that they belong to the same section (as done in the second example above). Placing comments before a new section helps the reader decide if it's the desired one. Leaving a blank line at the end of a section also visually helps spotting the end when editing it. Tabs are very convenient for indent but they do not copy-paste well. If spaces are used instead, it is recommended to avoid placing too many (2 to 4) so that editing in field doesn't become a burden with limited editors that do not support automatic indent. In the early days it used to be common to see arguments split at fixed tab positions because most keywords would not take more than two arguments. With modern versions featuring complex expressions this practice does not stand anymore, and is not recommended. ``` ### 2.2. Quoting and escaping ``` In modern configurations, some arguments require the use of some characters that were previously considered as pure delimiters. In order to make this possible, HAProxy supports character escaping by prepending a backslash ('\') in front of the character to be escaped, weak quoting within double quotes ('"') and strong quoting within single quotes ("'"). This is pretty similar to what is done in a number of programming languages and very close to what is commonly encountered in Bourne shell. The principle is the following: while the configuration parser cuts the lines into words, it also takes care of quotes and backslashes to decide whether a character is a delimiter or is the raw representation of this character within the current word. The escape character is then removed, the quotes are removed, and the remaining word is used as-is as a keyword or argument for example. If a backslash is needed in a word, it must either be escaped using itself (i.e. double backslash) or be strongly quoted. Escaping outside quotes is achieved by preceding a special character by a backslash ('\'): \ to mark a space and differentiate it from a delimiter \# to mark a hash and differentiate it from a comment \\ to use a backslash \' to use a single quote and differentiate it from strong quoting \" to use a double quote and differentiate it from weak quoting In addition, a few non-printable characters may be emitted using their usual C-language representation: \n to insert a line feed (LF, character \x0a or ASCII 10 decimal) \r to insert a carriage return (CR, character \x0d or ASCII 13 decimal) \t to insert a tab (character \x09 or ASCII 9 decimal) \xNN to insert character having ASCII code hex NN (e.g \x0a for LF). Weak quoting is achieved by surrounding double quotes ("") around the character or sequence of characters to protect. Weak quoting prevents the interpretation of: space or tab as a word separator ' single quote as a strong quoting delimiter # hash as a comment start Weak quoting permits the interpretation of environment variables (which are not evaluated outside of quotes) by preceding them with a dollar sign ('$'). If a dollar character is needed inside double quotes, it must be escaped using a backslash. 
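As a brief illustration (the ACL name, path and header below are hypothetical examples, not taken from elsewhere in this document), a value containing a space or a hash can either be escaped or weakly quoted, and a weakly quoted value may reference an environment variable:

    # escaped outside quotes: the space and the hash are part of the argument
    acl old_docs path_beg /my\ docs/\#archive
    # weakly quoted: delimiters are protected and "$SITE_NAME" is expanded at
    # parsing time if such a variable exists in the environment
    http-response set-header X-Site "$SITE_NAME"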
Strong quoting is achieved by surrounding single quotes ('') around the character or sequence of characters to protect. Inside single quotes, nothing is interpreted, it's the efficient way to quote regular expressions. As a result, here is the matrix indicating how special characters can be entered in different contexts (unprintable characters are replaced with their name within angle brackets). Note that some characters that may only be represented escaped have no possible representation inside single quotes, hence the '-' there: ``` | Character | Unquoted | Weakly quoted | Strongly quoted | | --- | --- | --- | --- | ### 2.3. Environment variables ``` HAProxy's configuration supports environment variables. Those variables are interpreted only within double quotes. Variables are expanded during the configuration parsing. Variable names must be preceded by a dollar ("$") and optionally enclosed with braces ("{}") similarly to what is done in Bourne shell. Variable names can contain alphanumerical characters or the character underscore ("_") but should not start with a digit. If the variable contains a list of several values separated by spaces, it can be expanded as individual arguments by enclosing the variable with braces and appending the suffix '[*]' before the closing brace. It is also possible to specify a default value to use when the variable is not set, by appending that value after a dash '-' next to the variable name. Note that the default value only replaces non existing variables, not empty ones. ``` Example: ``` bind "fd@${FD_APP1}" log "${LOCAL_SYSLOG-127.0.0.1}:514" local0 notice # send to local server user "$HAPROXY_USER" ``` ``` Some variables are defined by HAProxy, they can be used in the configuration file, or could be inherited by a program (See 3.7. Programs): * HAPROXY_LOCALPEER: defined at the startup of the process which contains the name of the local peer. (See "-L" in the management guide.) * HAPROXY_CFGFILES: list of the configuration files loaded by HAProxy, separated by semicolons. Can be useful in the case you specified a directory. * HAPROXY_MWORKER: In master-worker mode, this variable is set to 1. * HAPROXY_CLI: configured listeners addresses of the stats socket for every processes, separated by semicolons. * HAPROXY_MASTER_CLI: In master-worker mode, listeners addresses of the master CLI, separated by semicolons. In addition, some pseudo-variables are internally resolved and may be used as regular variables. Pseudo-variables always start with a dot ('.'), and are the only ones where the dot is permitted. The current list of pseudo-variables is: * .FILE: the name of the configuration file currently being parsed. * .LINE: the line number of the configuration file currently being parsed, starting at one. * .SECTION: the name of the section currently being parsed, or its type if the section doesn't have a name (e.g. "global"), or an empty string before the first section. These variables are resolved at the location where they are parsed. For example if a ".LINE" variable is used in a "[log-format](#log-format)" directive located in a defaults section, its line number will be resolved before parsing and compiling the "[log-format](#log-format)" directive, so this same line number will be reused by subsequent proxies. This way it is possible to emit information to help locate a rule in variables, logs, error statuses, health checks, header values, or even to use line numbers to name some config objects like servers for example. 
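As a small, hedged sketch of that idea (the frontend and header names are hypothetical), a pseudo-variable can be expanded anywhere regular variables are accepted, for example to record where a rule was defined:

    frontend www
        bind :80
        # "${.FILE}", "${.LINE}" and "${.SECTION}" are replaced at parsing
        # time, so the header value will look like "haproxy.cfg:123 [www]"
        http-response set-header X-Cfg-Origin "${.FILE}:${.LINE} [${.SECTION}]"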
See also "[external-check command](#external-check%20command)" for other variables. ``` ### 2.4. Conditional blocks ``` It may sometimes be convenient to be able to conditionally enable or disable some arbitrary parts of the configuration, for example to enable/disable SSL or ciphers, enable or disable some pre-production listeners without modifying the configuration, or adjust the configuration's syntax to support two distinct versions of HAProxy during a migration.. HAProxy brings a set of nestable preprocessor-like directives which allow to integrate or ignore some blocks of text. These directives must be placed on their own line and they act on the lines that follow them. Two of them support an expression, the other ones only switch to an alternate block or end a current level. The 4 following directives are defined to form conditional blocks: - .if <condition> - .elif <condition> - .else - .endif The ".if" directive nests a new level, ".elif" stays at the same level, ".else" as well, and ".endif" closes a level. Each ".if" must be terminated by a matching ".endif". The ".elif" may only be placed after ".if" or ".elif", and there is no limit to the number of ".elif" that may be chained. There may be only one ".else" per ".if" and it must always be after the ".if" or the last ".elif" of a block. Comments may be placed on the same line if needed after a '#', they will be ignored. The directives are tokenized like other configuration directives, and as such it is possible to use environment variables in conditions. Conditions can also be evaluated on startup with the -cc parameter. See "3. Starting HAProxy" in the management doc. The conditions are either an empty string (which then returns false), or an expression made of any combination of: - the integer zero ('0'), always returns "false" - a non-nul integer (e.g. '1'), always returns "true". - a predicate optionally followed by argument(s) in parenthesis. - a condition placed between a pair of parenthesis '(' and ')' - an exclamation mark ('!') preceding any of the non-empty elements above, and which will negate its status. - expressions combined with a logical AND ('&&'), which will be evaluated from left to right until one returns false - expressions combined with a logical OR ('||'), which will be evaluated from right to left until one returns true Note that like in other languages, the AND operator has precedence over the OR operator, so that "A && B || C && D" evalues as "(A && B) || (C && D)". The list of currently supported predicates is the following: - defined(<name>) : returns true if an environment variable <name> exists, regardless of its contents - feature(<name>) : returns true if feature <name> is listed as present in the features list reported by "haproxy -vv" (which means a <name> appears after a '+') - streq(<str1>,<str2>) : returns true only if the two strings are equal - strneq(<str1>,<str2>) : returns true only if the two strings differ - version_atleast(<ver>): returns true if the current haproxy version is at least as recent as <ver> otherwise false. The version syntax is the same as shown by "haproxy -v" and missing components are assumed as being zero. - version_before(<ver>) : returns true if the current haproxy version is strictly older than <ver> otherwise false. The version syntax is the same as shown by "haproxy -v" and missing components are assumed as being zero. ``` Example: ``` .if defined(HAPROXY_MWORKER) listen mwcli_px bind :1111 ... 
.endif

.if strneq("$SSL_ONLY",yes)
    bind :80
.endif

.if streq("$WITH_SSL",yes)
  .if feature(OPENSSL)
      bind :443 ssl crt ...
  .endif
.endif

.if feature(OPENSSL) && (streq("$WITH_SSL",yes) || streq("$SSL_ONLY",yes))
    bind :443 ssl crt ...
.endif

.if version_atleast(2.4-dev19)
    profiling.memory on
.endif

.if !feature(OPENSSL)
    .alert "SSL support is mandatory"
.endif
```

```
Four other directives are provided to report some status:

  - .diag "message"    : emit this message only when in diagnostic mode (-dD)
  - .notice "message"  : emit this message at level NOTICE
  - .warning "message" : emit this message at level WARNING
  - .alert "message"   : emit this message at level ALERT

Messages emitted at level WARNING may cause the process to fail to start if the "strict-mode" is enabled. Messages emitted at level ALERT will always cause a fatal error. These can be used to detect some inappropriate conditions and provide advice to the user.
```

Example:

```
.if "${A}"
  .if "${B}"
    .notice "A=1, B=1"
  .elif "${C}"
    .notice "A=1, B=0, C=1"
  .elif "${D}"
    .warning "A=1, B=0, C=0, D=1"
  .else
    .alert "A=1, B=0, C=0, D=0"
  .endif
.else
  .notice "A=0"
.endif

.diag "WTA/2021-05-07: replace 'redirect' with 'return' after switch to 2.4"
    http-request redirect location /goaway if ABUSE
```

### 2.5. Time format

```
Some parameters involve values representing time, such as timeouts. These values are generally expressed in milliseconds (unless explicitly stated otherwise) but may be expressed in any other unit by suffixing the unit to the numeric value. It is important to consider this because it will not be repeated for every keyword. Supported units are :

  - us : microseconds. 1 microsecond = 1/1000000 second
  - ms : milliseconds. 1 millisecond = 1/1000 second. This is the default.
  - s  : seconds. 1s = 1000ms
  - m  : minutes. 1m = 60s = 60000ms
  - h  : hours. 1h = 60m = 3600s = 3600000ms
  - d  : days. 1d = 24h = 1440m = 86400s = 86400000ms
```

### 2.6. Examples

```
# Simple configuration for an HTTP proxy listening on port 80 on all
# interfaces and forwarding requests to a single backend "servers" with a
# single server "server1" listening on 127.0.0.1:8000
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 127.0.0.1:8000 maxconn 32

# The same configuration defined with a single listen block. Shorter but
# less expressive, especially in HTTP mode.
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen http-in
    bind *:80
    server server1 127.0.0.1:8000 maxconn 32

Assuming haproxy is in $PATH, test these configurations in a shell with:

    $ sudo haproxy -f configuration.conf -c
```

3. Global parameters
---------------------

```
Parameters in the "global" section are process-wide and often OS-specific. They are generally set once and for all and do not need to be changed once correct. Some of them have command-line equivalents.
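Before the full keyword list, here is a minimal, hedged sketch of a typical global section (the values are arbitrary illustrations rather than recommendations, and a dedicated "haproxy" system user and group are assumed to exist):

    global
        daemon
        user haproxy
        group haproxy
        maxconn 2000
        # send logs to a local syslog daemon listening on UDP port 514
        log 127.0.0.1:514 local0 notice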
The following keywords are supported in the "global" section : * Process management and security - [51degrees-cache-size](#51degrees-cache-size) - [51degrees-data-file](#51degrees-data-file) - [51degrees-property-name-list](#51degrees-property-name-list) - [51degrees-property-separator](#51degrees-property-separator) - [ca-base](#ca-base) - [chroot](#chroot) - [cluster-secret](#cluster-secret) - [cpu-map](#cpu-map) - [crt-base](#crt-base) - [daemon](#daemon) - [default-path](#default-path) - [description](#description) - [deviceatlas-json-file](#deviceatlas-json-file) - [deviceatlas-log-level](#deviceatlas-log-level) - [deviceatlas-properties-cookie](#deviceatlas-properties-cookie) - [deviceatlas-separator](#deviceatlas-separator) - [expose-experimental-directives](#expose-experimental-directives) - [external-check](#external-check) - [fd-hard-limit](#fd-hard-limit) - [gid](#gid) - [grace](#grace) - [group](#group) - [h1-accept-payload-with-any-method](#h1-accept-payload-with-any-method) - [h1-case-adjust](#h1-case-adjust) - [h1-case-adjust-file](#h1-case-adjust-file) - [h2-workaround-bogus-websocket-clients](#h2-workaround-bogus-websocket-clients) - [hard-stop-after](#hard-stop-after) - [httpclient.resolvers.id](#httpclient.resolvers.id) - [httpclient.resolvers.prefer](#httpclient.resolvers.prefer) - [httpclient.ssl.ca-file](#httpclient.ssl.ca-file) - [httpclient.ssl.verify](#httpclient.ssl.verify) - [insecure-fork-wanted](#insecure-fork-wanted) - [insecure-setuid-wanted](#insecure-setuid-wanted) - [issuers-chain-path](#issuers-chain-path) - [localpeer](#localpeer) - [log](#log) - [log-send-hostname](#log-send-hostname) - [log-tag](#log-tag) - [lua-load](#lua-load) - [lua-load-per-thread](#lua-load-per-thread) - [lua-prepend-path](#lua-prepend-path) - [mworker-max-reloads](#mworker-max-reloads) - [nbthread](#nbthread) - [node](#node) - [numa-cpu-mapping](#numa-cpu-mapping) - [pidfile](#pidfile) - [pp2-never-send-local](#pp2-never-send-local) - [presetenv](#presetenv) - [resetenv](#resetenv) - [set-dumpable](#set-dumpable) - [set-var](#set-var) - [setenv](#setenv) - [ssl-default-bind-ciphers](#ssl-default-bind-ciphers) - [ssl-default-bind-ciphersuites](#ssl-default-bind-ciphersuites) - [ssl-default-bind-curves](#ssl-default-bind-curves) - [ssl-default-bind-options](#ssl-default-bind-options) - [ssl-default-server-ciphers](#ssl-default-server-ciphers) - [ssl-default-server-ciphersuites](#ssl-default-server-ciphersuites) - [ssl-default-server-options](#ssl-default-server-options) - [ssl-dh-param-file](#ssl-dh-param-file) - [ssl-propquery](#ssl-propquery) - [ssl-provider](#ssl-provider) - [ssl-provider-path](#ssl-provider-path) - [ssl-server-verify](#ssl-server-verify) - [ssl-skip-self-issued-ca](#ssl-skip-self-issued-ca) - [stats](#stats) - [strict-limits](#strict-limits) - [uid](#uid) - [ulimit-n](#ulimit-n) - [unix-bind](#unix-bind) - [unsetenv](#unsetenv) - [user](#user) - [wurfl-cache-size](#wurfl-cache-size) - [wurfl-data-file](#wurfl-data-file) - [wurfl-information-list](#wurfl-information-list) - [wurfl-information-list-separator](#wurfl-information-list-separator) * Performance tuning - [busy-polling](#busy-polling) - [max-spread-checks](#max-spread-checks) - [maxcompcpuusage](#maxcompcpuusage) - [maxcomprate](#maxcomprate) - [maxconn](#maxconn) - [maxconnrate](#maxconnrate) - [maxpipes](#maxpipes) - [maxsessrate](#maxsessrate) - [maxsslconn](#maxsslconn) - [maxsslrate](#maxsslrate) - [maxzlibmem](#maxzlibmem) - [no-memory-trimming](#no-memory-trimming) - [noepoll](#noepoll) - 
[noevports](#noevports) - [nogetaddrinfo](#nogetaddrinfo) - [nokqueue](#nokqueue) - [nopoll](#nopoll) - [noreuseport](#noreuseport) - [nosplice](#nosplice) - [profiling.tasks](#profiling.tasks) - [server-state-base](#server-state-base) - [server-state-file](#server-state-file) - [spread-checks](#spread-checks) - [ssl-engine](#ssl-engine) - [ssl-mode-async](#ssl-mode-async) - [tune.buffers.limit](#tune.buffers.limit) - [tune.buffers.reserve](#tune.buffers.reserve) - [tune.bufsize](#tune.bufsize) - [tune.comp.maxlevel](#tune.comp.maxlevel) - [tune.fd.edge-triggered](#tune.fd.edge-triggered) - [tune.h2.header-table-size](#tune.h2.header-table-size) - [tune.h2.initial-window-size](#tune.h2.initial-window-size) - [tune.h2.max-concurrent-streams](#tune.h2.max-concurrent-streams) - [tune.http.cookielen](#tune.http.cookielen) - [tune.http.logurilen](#tune.http.logurilen) - [tune.http.maxhdr](#tune.http.maxhdr) - [tune.idle-pool.shared](#tune.idle-pool.shared) - [tune.idletimer](#tune.idletimer) - [tune.lua.forced-yield](#tune.lua.forced-yield) - [tune.lua.maxmem](#tune.lua.maxmem) - [tune.lua.service-timeout](#tune.lua.service-timeout) - [tune.lua.session-timeout](#tune.lua.session-timeout) - [tune.lua.task-timeout](#tune.lua.task-timeout) - [tune.maxaccept](#tune.maxaccept) - [tune.maxpollevents](#tune.maxpollevents) - [tune.maxrewrite](#tune.maxrewrite) - [tune.pattern.cache-size](#tune.pattern.cache-size) - [tune.peers.max-updates-at-once](#tune.peers.max-updates-at-once) - [tune.pipesize](#tune.pipesize) - [tune.pool-high-fd-ratio](#tune.pool-high-fd-ratio) - [tune.pool-low-fd-ratio](#tune.pool-low-fd-ratio) - [tune.quic.frontend.conn-tx-buffers.limit](#tune.quic.frontend.conn-tx-buffers.limit) - [tune.quic.frontend.max-idle-timeout](#tune.quic.frontend.max-idle-timeout) - [tune.quic.frontend.max-streams-bidi](#tune.quic.frontend.max-streams-bidi) - [tune.quic.retry-threshold](#tune.quic.retry-threshold) - [tune.quic.socket-owner](#tune.quic.socket-owner) - [tune.rcvbuf.client](#tune.rcvbuf.client) - [tune.rcvbuf.server](#tune.rcvbuf.server) - [tune.recv\_enough](#tune.recv_enough) - [tune.runqueue-depth](#tune.runqueue-depth) - [tune.sched.low-latency](#tune.sched.low-latency) - [tune.sndbuf.client](#tune.sndbuf.client) - [tune.sndbuf.server](#tune.sndbuf.server) - [tune.ssl.cachesize](#tune.ssl.cachesize) - [tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size) - tune.ssl.capture-cipherlist-size (deprecated) - [tune.ssl.default-dh-param](#tune.ssl.default-dh-param) - [tune.ssl.force-private-cache](#tune.ssl.force-private-cache) - [tune.ssl.hard-maxrecord](#tune.ssl.hard-maxrecord) - [tune.ssl.keylog](#tune.ssl.keylog) - [tune.ssl.lifetime](#tune.ssl.lifetime) - [tune.ssl.maxrecord](#tune.ssl.maxrecord) - [tune.ssl.ssl-ctx-cache-size](#tune.ssl.ssl-ctx-cache-size) - [tune.vars.global-max-size](#tune.vars.global-max-size) - [tune.vars.proc-max-size](#tune.vars.proc-max-size) - [tune.vars.reqres-max-size](#tune.vars.reqres-max-size) - [tune.vars.sess-max-size](#tune.vars.sess-max-size) - [tune.vars.txn-max-size](#tune.vars.txn-max-size) - [tune.zlib.memlevel](#tune.zlib.memlevel) - [tune.zlib.windowsize](#tune.zlib.windowsize) * Debugging - [anonkey](#anonkey) - [quiet](#quiet) - [zero-warning](#zero-warning) ``` ### 3.1. Process management and security **51degrees-data-file** <file path> ``` The path of the 51Degrees data file to provide device detection services. The file should be unzipped and accessible by HAProxy with relevant permissions. 
Please note that this option is only available when HAProxy has been compiled with USE_51DEGREES.
```

**51degrees-property-name-list** [<string> ...]

```
A list of 51Degrees property names to be loaded from the dataset. A full list of names is available on the 51Degrees website: https://51degrees.com/resources/property-dictionary

Please note that this option is only available when HAProxy has been compiled with USE_51DEGREES.
```

**51degrees-property-separator** <char>

```
A char that will be appended to every property value in a response header containing 51Degrees results. If not set, it defaults to ','.

Please note that this option is only available when HAProxy has been compiled with USE_51DEGREES.
```

**51degrees-cache-size** <number>

```
Sets the size of the 51Degrees converter cache to <number> entries. This is an LRU cache which remembers previous device detections and their results. By default, this cache is disabled.

Please note that this option is only available when HAProxy has been compiled with USE_51DEGREES.
```

**ca-base** <dir>

```
Assigns a default directory to fetch SSL CA certificates and CRLs from when a relative path is used with "ca-file", "[ca-verify-file](#ca-verify-file)" or "crl-file" directives. Absolute locations specified in "ca-file", "[ca-verify-file](#ca-verify-file)" and "crl-file" prevail and ignore "[ca-base](#ca-base)".
```

**chroot** <jail dir>

```
Changes the current directory to <jail dir> and performs a chroot() there before dropping privileges. This increases the security level in case an unknown vulnerability would be exploited, since it would make it very hard for the attacker to exploit the system. This only works when the process is started with superuser privileges. It is important to ensure that <jail dir> is both empty and non-writable to anyone.
```

**close-spread-time** <time>

```
Define a time window during which idle and active connection closing is spread in case of soft-stop. After a SIGUSR1 is received and the grace period is over (if any), the idle connections will all be closed at once if this option is not set, and active HTTP or HTTP2 connections will be ended after the next request is received, either by appending a "Connection: close" line to the HTTP response, or by sending a GOAWAY frame in case of HTTP2. When this option is set, connection closing will be spread over this set <time>.

If the close-spread-time is set to "infinite", active connection closing during a soft-stop will be disabled. The "Connection: close" header will not be added to HTTP responses (or GOAWAY for HTTP2) anymore and idle connections will only be closed once their timeout is reached (based on the various timeouts set in the configuration).
```

Arguments :

```
<time> is a time window (by default in milliseconds) during which connection closing will be spread during a soft-stop operation, or "infinite" if active connection closing should be disabled.
```

```
It is recommended to set this to a value lower than the one used in the "[hard-stop-after](#hard-stop-after)" option if this one is used, so that all connections have a chance to gracefully close before the process stops.
```

**See also:** grace, hard-stop-after, idle-close-on-response

**cluster-secret** <secret>

```
Define an ASCII string secret shared between several nodes belonging to the same cluster. It can be used for different purposes. It is at least used to derive stateless reset tokens for all the QUIC connections instantiated by this process.
This is also the case to derive secrets used to encrypt Retry tokens. If this parameter is not set, a random value will be selected on process startup. This allows to use features which rely on it, albeit with some limitations. ``` **cpu-map** [auto:]<thread-group>[/<thread-set>] <cpu-set>... ``` On some operating systems, it is possible to bind a thread group or a thread to a specific CPU set. This means that the designated threads will never run on other CPUs. The "[cpu-map](#cpu-map)" directive specifies CPU sets for individual threads or thread groups. The first argument is a thread group range, optionally followed by a thread set. These ranges have the following format: all | odd | even | number[-[number]] <number> must be a number between 1 and 32 or 64, depending on the machine's word size. Any group IDs above 'thread-groups' and any thread IDs above the machine's word size are ignored. All thread numbers are relative to the group they belong to. It is possible to specify a range with two such number delimited by a dash ('-'). It also is possible to specify all threads at once using "all", only odd numbers using "[odd](#odd)" or even numbers using "[even](#even)", just like with the "thread" bind directive. The second and forthcoming arguments are CPU sets. Each CPU set is either a unique number starting at 0 for the first CPU or a range with two such numbers delimited by a dash ('-'). Outside of Linux and BSDs, there may be a limitation on the maximum CPU index to either 31 or 63. Multiple CPU numbers or ranges may be specified, and the processes or threads will be allowed to bind to all of them. Obviously, multiple "[cpu-map](#cpu-map)" directives may be specified. Each "[cpu-map](#cpu-map)" directive will replace the previous ones when they overlap. Ranges can be partially defined. The higher bound can be omitted. In such case, it is replaced by the corresponding maximum value, 32 or 64 depending on the machine's word size. The prefix "auto:" can be added before the thread set to let HAProxy automatically bind a set of threads to a CPU by incrementing threads and CPU sets. To be valid, both sets must have the same size. No matter the declaration order of the CPU sets, it will be bound from the lowest to the highest bound. Having both a group and a thread range with the "auto:" prefix is not supported. Only one range is supported, the other one must be a fixed number. Note that group ranges are supported for historical reasons. Nowadays, a lone number designates a thread group and must be 1 if thread-groups are not used, and specifying a thread range or number requires to prepend "1/" in front of it if thread groups are not used. Finally, "1" is strictly equivalent to "1/all" and designates all threads in the group. ``` Examples: ``` cpu-map 1/all 0-3 # bind all threads of the first group on the # first 4 CPUs cpu-map 1/1- 0- # will be replaced by "cpu-map 1/1-64 0-63" # or "cpu-map 1/1-32 0-31" depending on the machine's # word size. # all these lines bind thread 1 to the cpu 0, the thread 2 to cpu 1 # and so on. cpu-map auto:1/1-4 0-3 cpu-map auto:1/1-4 0-1 2-3 cpu-map auto:1/1-4 3 2 1 0 # bind each thread to exactly one CPU using all/odd/even keyword cpu-map auto:1/all 0-63 cpu-map auto:1/even 0-31 cpu-map auto:1/odd 32-63 # invalid cpu-map because thread and CPU sets have different sizes. 
cpu-map auto:1/1-4 0 # invalid cpu-map auto:1/1 0-3 # invalid # map 40 threads of those 4 groups to individual CPUs cpu-map auto:1/1-10 0-9 cpu-map auto:2/1-10 10-19 cpu-map auto:3/1-10 20-29 cpu-map auto:4/1-10 30-39 # Map 80 threads to one physical socket and 80 others to another socket # without forcing assignment. These are split into 4 groups since no # group may have more than 64 threads. cpu-map 1/1-40 0-39 80-119 # node0, siblings 0 & 1 cpu-map 2/1-40 0-39 80-119 cpu-map 3/1-40 40-79 120-159 # node1, siblings 0 & 1 cpu-map 4/1-40 40-79 120-159 ``` **crt-base** <dir> ``` Assigns a default directory to fetch SSL certificates from when a relative path is used with "crtfile" or "crt" directives. Absolute locations specified prevail and ignore "[crt-base](#crt-base)". ``` **daemon** ``` Makes the process fork into background. This is the recommended mode of operation. It is equivalent to the command line "-D" argument. It can be disabled by the command line "-db" argument. This option is ignored in systemd mode. ``` **default-path** { current | config | parent | origin <path> } ``` By default HAProxy loads all files designated by a relative path from the location the process is started in. In some circumstances it might be desirable to force all relative paths to start from a different location just as if the process was started from such locations. This is what this directive is made for. Technically it will perform a temporary chdir() to the designated location while processing each configuration file, and will return to the original directory after processing each file. It takes an argument indicating the policy to use when loading files whose path does not start with a slash ('/'): - "current" indicates that all relative files are to be loaded from the directory the process is started in ; this is the default. - "config" indicates that all relative files should be loaded from the directory containing the configuration file. More specifically, if the configuration file contains a slash ('/'), the longest part up to the last slash is used as the directory to change to, otherwise the current directory is used. This mode is convenient to bundle maps, errorfiles, certificates and Lua scripts together as relocatable packages. When multiple configuration files are loaded, the directory is updated for each of them. - "parent" indicates that all relative files should be loaded from the parent of the directory containing the configuration file. More specifically, if the configuration file contains a slash ('/'), ".." is appended to the longest part up to the last slash is used as the directory to change to, otherwise the directory is "..". This mode is convenient to bundle maps, errorfiles, certificates and Lua scripts together as relocatable packages, but where each part is located in a different subdirectory (e.g. "config/", "certs/", "maps/", ...). - "origin" indicates that all relative files should be loaded from the designated (mandatory) path. This may be used to ease management of different HAProxy instances running in parallel on a system, where each instance uses a different prefix but where the rest of the sections are made easily relocatable. Each "[default-path](#default-path)" directive instantly replaces any previous one and will possibly result in switching to a different directory. 
While this should always result in the desired behavior, it is really not a good practice to use multiple default-path directives, and if used, the policy ought to remain consistent across all configuration files. Warning: some configuration elements such as maps or certificates are uniquely identified by their configured path. By using a relocatable layout, it becomes possible for several of them to end up with the same unique name, making it difficult to update them at run time, especially when multiple configuration files are loaded from different directories. It is essential to observe a strict collision-free file naming scheme before adopting relative paths. A robust approach could consist in prefixing all files names with their respective site name, or in doing so at the directory level. ``` **description** <text> ``` Add a text that describes the instance. Please note that it is required to escape certain characters (# for example) and this text is inserted into a html page so you should avoid using "<" and ">" characters. ``` **deviceatlas-json-file** <path> ``` Sets the path of the DeviceAtlas JSON data file to be loaded by the API. The path must be a valid JSON data file and accessible by HAProxy process. ``` **deviceatlas-log-level** <value> ``` Sets the level of information returned by the API. This directive is optional and set to 0 by default if not set. ``` **deviceatlas-properties-cookie** <name> ``` Sets the client cookie's name used for the detection if the DeviceAtlas Client-side component was used during the request. This directive is optional and set to DAPROPS by default if not set. ``` **deviceatlas-separator** <char> ``` Sets the character separator for the API properties results. This directive is optional and set to | by default if not set. ``` **expose-experimental-directives** ``` This statement must appear before using directives tagged as experimental or the config file will be rejected. ``` **external-check** ``` Allows the use of an external agent to perform health checks. This is disabled by default as a security precaution, and even when enabled, checks may still fail unless "[insecure-fork-wanted](#insecure-fork-wanted)" is enabled as well. If the program launched makes use of a setuid executable (it should really not), you may also need to set "[insecure-setuid-wanted](#insecure-setuid-wanted)" in the global section. See "[option external-check](#option%20external-check)", and "[insecure-fork-wanted](#insecure-fork-wanted)", and "[insecure-setuid-wanted](#insecure-setuid-wanted)". ``` **fd-hard-limit** <number> ``` Sets an upper bound to the maximum number of file descriptors that the process will use, regardless of system limits. While "[ulimit-n](#ulimit-n)" and "maxconn" may be used to enforce a value, when they are not set, the process will be limited to the hard limit of the RLIMIT_NOFILE setting as reported by "ulimit -n -H". But some modern operating systems are now allowing extremely large values here (in the order of 1 billion), which will consume way too much RAM for regular usage. The fd-hard-limit setting is provided to enforce a possibly lower bound to this limit. This means that it will always respect the system-imposed limits when they are below <number> but the specified value will be used if system-imposed limits are higher. 
In the example below, no other setting is specified and the maxconn value will automatically adapt to the lower of "[fd-hard-limit](#fd-hard-limit)" and the system-imposed limit: global # use as many FDs as possible but no more than 50000 fd-hard-limit 50000 ``` **See also:** ulimit-n, maxconn **gid** <number> ``` Changes the process's group ID to <number>. It is recommended that the group ID is dedicated to HAProxy or to a small set of similar daemons. HAProxy must be started with a user belonging to this group, or with superuser privileges. Note that if HAProxy is started from a user having supplementary groups, it will only be able to drop these groups if started with superuser privileges. See also "group" and "uid". ``` **grace** <time> ``` Defines a delay between SIGUSR1 and real soft-stop. ``` Arguments : ``` <time> is an extra delay (by default in milliseconds) after receipt of the SIGUSR1 signal that will be waited for before proceeding with the soft-stop operation. ``` ``` This is used for compatibility with legacy environments where the haproxy process needs to be stopped but some external components need to detect the status before listeners are unbound. The principle is that the internal "[stopping](#stopping)" variable (which is reported by the "[stopping](#stopping)" sample fetch function) will be turned to true, but listeners will continue to accept connections undisturbed, until the delay expires, after what the regular soft-stop will proceed. This must not be used with processes that are reloaded, or this will prevent the old process from unbinding, and may prevent the new one from starting, or simply cause trouble. ``` Example: ``` global grace 10s # Returns 200 OK until stopping is set via SIGUSR1 frontend ext-check bind :9999 monitor-uri /ext-check monitor fail if { stopping } ``` ``` Please note that a more flexible and durable approach would instead consist for an orchestration system in setting a global variable from the CLI, use that variable to respond to external checks, then after a delay send the SIGUSR1 signal. ``` Example: ``` # Returns 200 OK until proc.stopping is set to non-zero. May be done # from HTTP using set-var(proc.stopping) or from the CLI using: # > set var proc.stopping int(1) frontend ext-check bind :9999 monitor-uri /ext-check monitor fail if { var(proc.stopping) -m int gt 0 } ``` **See also:** hard-stop-after, monitor **group** <group name> ``` Similar to "gid" but uses the GID of group name <group name> from /etc/group. See also "gid" and "user". ``` **h1-accept-payload-with-any-method** ``` Does not reject HTTP/1.0 GET/HEAD/DELETE requests with a payload. While It is explicitly allowed in HTTP/1.1, HTTP/1.0 is not clear on this point and some old servers don't expect any payload and never look for body length (via Content-Length or Transfer-Encoding headers). It means that some intermediaries may properly handle the payload for HTTP/1.0 GET/HEAD/DELETE requests, while some others may totally ignore it. That may lead to security issues because a request smuggling attack is possible. Thus, by default, HAProxy rejects HTTP/1.0 GET/HEAD/DELETE requests with a payload. However, it may be an issue with some old clients. In this case, this global option may be set. ``` **h1-case-adjust** <from> <to> ``` Defines the case adjustment to apply, when enabled, to the header name <from>, to change it to <to> before sending it to HTTP/1 clients or servers. <from> must be in lower case, and <from> and <to> must not differ except for their case. 
It may be repeated if several header names need to be adjusted. Duplicate entries are not allowed. If a lot of header names have to be adjusted, it might be more convenient to use "[h1-case-adjust-file](#h1-case-adjust-file)". Please note that no transformation will be applied unless "option h1-case-adjust-bogus-client" or "[option h1-case-adjust-bogus-server](#option%20h1-case-adjust-bogus-server)" is specified in a proxy. There is no standard case for header names because, as stated in RFC7230, they are case-insensitive. So applications must handle them in a case-insensitive manner. But some bogus applications violate the standards and erroneously rely on the cases most commonly used by browsers. This problem becomes critical with HTTP/2 because all header names must be exchanged in lower case, and HAProxy follows the same convention. All header names are sent in lower case to clients and servers, regardless of the HTTP version. Applications which fail to properly process requests or responses may require temporarily using such workarounds to adjust the header names sent to them for the time it takes the application to be fixed. Please note that an application which requires such workarounds might be vulnerable to content smuggling attacks and must absolutely be fixed. ``` Example: ``` global h1-case-adjust content-length Content-Length ``` ``` See "[h1-case-adjust-file](#h1-case-adjust-file)", "[option h1-case-adjust-bogus-client](#option%20h1-case-adjust-bogus-client)" and "[option h1-case-adjust-bogus-server](#option%20h1-case-adjust-bogus-server)". ``` **h1-case-adjust-file** <hdrs-file> ``` Defines a file containing a list of key/value pairs used to adjust the case of some header names before sending them to HTTP/1 clients or servers. The file <hdrs-file> must contain 2 header names per line. The first one must be in lower case and both must not differ except for their case. Lines which start with '#' are ignored, just like empty lines. Leading and trailing tabs and spaces are stripped. Duplicate entries are not allowed. Please note that no transformation will be applied unless "[option h1-case-adjust-bogus-client](#option%20h1-case-adjust-bogus-client)" or "[option h1-case-adjust-bogus-server](#option%20h1-case-adjust-bogus-server)" is specified in a proxy. If this directive is repeated, only the last one will be processed. It is an alternative to the directive "[h1-case-adjust](#h1-case-adjust)" if a lot of header names need to be adjusted. Please read the risks associated with using this. See "[h1-case-adjust](#h1-case-adjust)", "[option h1-case-adjust-bogus-client](#option%20h1-case-adjust-bogus-client)" and "[option h1-case-adjust-bogus-server](#option%20h1-case-adjust-bogus-server)". ``` **h2-workaround-bogus-websocket-clients** ``` This disables the announcement of the support for h2 websockets to clients. This can be used to work around clients which have issues implementing the relatively recent RFC8441, such as Firefox 88. To allow clients to automatically downgrade to http/1.1 for the websocket tunnel, specify h2 support on the bind line using "alpn" without an explicit "proto" keyword. If this statement was previously activated, it can be disabled by prefixing the keyword with "no". ``` **hard-stop-after** <time> ``` Defines the maximum time allowed to perform a clean soft-stop. ``` Arguments : ``` <time> is the maximum time (by default in milliseconds) for which the instance will remain alive when a soft-stop is received via the SIGUSR1 signal.
``` ``` This may be used to ensure that the instance will quit even if connections remain opened during a soft-stop (for example with long timeouts for a proxy in tcp mode). It applies both in TCP and HTTP mode. ``` Example: ``` global hard-stop-after 30s ``` **See also:** grace **httpclient.resolvers.id** <resolvers id> ``` This option defines the resolvers section with which the httpclient will try to resolve. Default option is the "default" resolvers ID. By default, if this option is not used, it will simply disable the resolving if the section is not found. However, when this option is explicitly enabled it will trigger a configuration error if it fails to load. ``` **httpclient.resolvers.prefer** <ipv4|ipv6> ``` This option allows to chose which family of IP you want when resolving, which is convenient when IPv6 is not available on your network. Default option is "[ipv6](#ipv6)". ``` **httpclient.ssl.ca-file** <cafile> ``` This option defines the ca-file which should be used to verify the server certificate. It takes the same parameters as the "ca-file" option on the server line. By default and when this option is not used, the value is "@system-ca" which tries to load the CA of the system. If it fails the SSL will be disabled for the httpclient. However, when this option is explicitly enabled it will trigger a configuration error if it fails. ``` **httpclient.ssl.verify** [none|required] ``` Works the same way as the verify option on server lines. If specified to 'none', servers certificates are not verified. Default option is "required". By default and when this option is not used, the value is "required". If it fails the SSL will be disabled for the httpclient. However, when this option is explicitly enabled it will trigger a configuration error if it fails. ``` **insecure-fork-wanted** ``` By default HAProxy tries hard to prevent any thread and process creation after it starts. Doing so is particularly important when using Lua files of uncertain origin, and when experimenting with development versions which may still contain bugs whose exploitability is uncertain. And generally speaking it's good hygiene to make sure that no unexpected background activity can be triggered by traffic. But this prevents external checks from working, and may break some very specific Lua scripts which actively rely on the ability to fork. This option is there to disable this protection. Note that it is a bad idea to disable it, as a vulnerability in a library or within HAProxy itself will be easier to exploit once disabled. In addition, forking from Lua or anywhere else is not reliable as the forked process may randomly embed a lock set by another thread and never manage to finish an operation. As such it is highly recommended that this option is never used and that any workload requiring such a fork be reconsidered and moved to a safer solution (such as agents instead of external checks). This option supports the "no" prefix to disable it. ``` **insecure-setuid-wanted** ``` HAProxy doesn't need to call executables at run time (except when using external checks which are strongly recommended against), and is even expected to isolate itself into an empty chroot. As such, there basically is no valid reason to allow a setuid executable to be called without the user being fully aware of the risks. In a situation where HAProxy would need to call external checks and/or disable chroot, exploiting a vulnerability in a library or in HAProxy itself could lead to the execution of an external program. 
On Linux it is possible to lock the process so that any setuid bit present on such an executable is ignored. This significantly reduces the risk of privilege escalation in such a situation. This is what HAProxy does by default. In case this causes a problem to an external check (for example one which would need the "ping" command), then it is possible to disable this protection by explicitly adding this directive in the global section. If enabled, it is possible to turn it back off by prefixing it with the "no" keyword. ``` **issuers-chain-path** <dir> ``` Assigns a directory to load certificate chain for issuer completion. All files must be in PEM format. For certificates loaded with "crt" or "[crt-list](#crt-list)", if certificate chain is not included in PEM (also commonly known as intermediate certificate), HAProxy will complete chain if the issuer of the certificate corresponds to the first certificate of the chain loaded with "[issuers-chain-path](#issuers-chain-path)". A "crt" file with PrivateKey+Certificate+IntermediateCA2+IntermediateCA1 could be replaced with PrivateKey+Certificate. HAProxy will complete the chain if a file with IntermediateCA2+IntermediateCA1 is present in "[issuers-chain-path](#issuers-chain-path)" directory. All other certificates with the same issuer will share the chain in memory. ``` **localpeer** <name> ``` Sets the local instance's peer name. It will be ignored if the "-L" command line argument is specified or if used after "[peers](#peers)" section definitions. In such cases, a warning message will be emitted during the configuration parsing. This option will also set the HAPROXY_LOCALPEER environment variable. See also "-L" in the management guide and "[peers](#peers)" section below. ``` **log** <address> [len <length>] [format <format>] [sample <ranges>:<sample\_size>] <facility> [max level [min level]] ``` Adds a global syslog server. Several global servers can be defined. They will receive logs for starts and exits, as well as all logs from proxies configured with "log global". <address> can be one of: - An IPv4 address optionally followed by a colon and a UDP port. If no port is specified, 514 is used by default (the standard syslog port). - An IPv6 address followed by a colon and optionally a UDP port. If no port is specified, 514 is used by default (the standard syslog port). - A filesystem path to a datagram UNIX domain socket, keeping in mind considerations for chroot (be sure the path is accessible inside the chroot) and uid/gid (be sure the path is appropriately writable). - A file descriptor number in the form "fd@<number>", which may point to a pipe, terminal, or socket. In this case unbuffered logs are used and one writev() call per log is performed. This is a bit expensive but acceptable for most workloads. Messages sent this way will not be truncated but may be dropped, in which case the DroppedLogs counter will be incremented. The writev() call is atomic even on pipes for messages up to PIPE_BUF size, which POSIX recommends to be at least 512 and which is 4096 bytes on most modern operating systems. Any larger message may be interleaved with messages from other processes. Exceptionally for debugging purposes the file descriptor may also be directed to a file, but doing so will significantly slow HAProxy down as non-blocking calls will be ignored. Also there will be no way to purge nor rotate this file without restarting the process. 
Note that the configured syslog format is preserved, so the output is suitable for use with a TCP syslog server. See also the "short" and "raw" format below. - "stdout" / "stderr", which are respectively aliases for "fd@1" and "fd@2", see above. - A ring buffer in the form "ring@<name>", which will correspond to an in-memory ring buffer accessible over the CLI using the "show events" command, which will also list existing rings and their sizes. Such buffers are lost on reload or restart but when used as a complement this can help troubleshooting by having the logs instantly available. You may want to reference some environment variables in the address parameter, see [section 2.3](#2.3) about environment variables. <length> is an optional maximum line length. Log lines larger than this value will be truncated before being sent. The reason is that syslog servers act differently on log line length. All servers support the default value of 1024, but some servers simply drop larger lines while others do log them. If a server supports long lines, it may make sense to set this value here in order to avoid truncating long lines. Similarly, if a server drops long lines, it is preferable to truncate them before sending them. Accepted values are 80 to 65535 inclusive. The default value of 1024 is generally fine for all standard usages. Some specific cases of long captures or JSON-formatted logs may require larger values. You may also need to increase "[tune.http.logurilen](#tune.http.logurilen)" if your request URIs are truncated. <format> is the log format used when generating syslog messages. It may be one of the following : local Analog to rfc3164 syslog message format except that hostname field is stripped. This is the default. Note: option "[log-send-hostname](#log-send-hostname)" switches the default to rfc3164. rfc3164 The RFC3164 syslog message format. (https://tools.ietf.org/html/rfc3164) rfc5424 The RFC5424 syslog message format. (https://tools.ietf.org/html/rfc5424) priority A message containing only a level plus syslog facility between angle brackets such as '<63>', followed by the text. The PID, date, time, process name and system name are omitted. This is designed to be used with a local log server. short A message containing only a level between angle brackets such as '<3>', followed by the text. The PID, date, time, process name and system name are omitted. This is designed to be used with a local log server. This format is compatible with what the systemd logger consumes. timed A message containing only a level between angle brackets such as '<3>', followed by ISO date and by the text. The PID, process name and system name are omitted. This is designed to be used with a local log server. iso A message containing only the ISO date, followed by the text. The PID, process name and system name are omitted. This is designed to be used with a local log server. raw A message containing only the text. The level, PID, date, time, process name and system name are omitted. This is designed to be used in containers or during development, where the severity only depends on the file descriptor used (stdout/stderr). <ranges> A list of comma-separated ranges to identify the logs to sample. This is used to balance the load of the logs to send to the log server. The limits of the ranges cannot be null. They are numbered from 1. The size or period (in number of logs) of the sample must be set with <sample_size> parameter. 
<sample_size> The size of the sample in number of logs to consider when balancing their logging loads. It is used to balance the load of the logs to send to the syslog server. This size must be greater or equal to the maximum of the high limits of the ranges. (see also <ranges> parameter). <facility> must be one of the 24 standard syslog facilities : kern user mail daemon auth syslog lpr news uucp cron auth2 ftp ntp audit alert cron2 local0 local1 local2 local3 local4 local5 local6 local7 Note that the facility is ignored for the "short" and "raw" formats, but still required as a positional field. It is recommended to use "[daemon](#daemon)" in this case to make it clear that it's only supposed to be used locally. An optional level can be specified to filter outgoing messages. By default, all messages are sent. If a maximum level is specified, only messages with a severity at least as important as this level will be sent. An optional minimum level can be specified. If it is set, logs emitted with a more severe level than this one will be capped to this level. This is used to avoid sending "emerg" messages on all terminals on some default syslog configurations. Eight levels are known : emerg alert crit err warning notice info debug ``` **log-send-hostname** [<string>] ``` Sets the hostname field in the syslog header. If optional "string" parameter is set the header is set to the string contents, otherwise uses the hostname of the system. Generally used if one is not relaying logs through an intermediate syslog server or for simply customizing the hostname printed in the logs. ``` **log-tag** <string> ``` Sets the tag field in the syslog header to this string. It defaults to the program name as launched from the command line, which usually is "haproxy". Sometimes it can be useful to differentiate between multiple processes running on the same host. See also the per-proxy "log-tag" directive. ``` **lua-load** <file> [ <arg1> [ <arg2> [ ... ] ] ] ``` This global directive loads and executes a Lua file in the shared context that is visible to all threads. Any variable set in such a context is visible from any thread. This is the easiest and recommended way to load Lua programs but it will not scale well if a lot of Lua calls are performed, as only one thread may be running on the global state at a time. A program loaded this way will always see 0 in the "core.thread" variable. This directive can be used multiple times. args are available in the lua file using the code below in the body of the file. Do not forget that Lua arrays start at index 1. A "local" variable declared in a file is available in the entire file and not available on other files. local args = table.pack(...) ``` **lua-load-per-thread** <file> [ <arg1> [ <arg2> [ ... ] ] ] ``` This global directive loads and executes a Lua file into each started thread. Any global variable has a thread-local visibility so that each thread could see a different value. As such it is strongly recommended not to use global variables in programs loaded this way. An independent copy is loaded and initialized for each thread, everything is done sequentially and in the thread's numeric order from 1 to nbthread. If some operations need to be performed only once, the program should check the "core.thread" variable to figure what thread is being initialized. Programs loaded this way will run concurrently on all threads and will be highly scalable. 
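As a purely illustrative sketch (the file names are assumptions), both loading modes may be combined, keeping shared state in the former and scalable code in the latter:

   global
      lua-load            /etc/haproxy/lua/shared_init.lua  arg1 arg2
      lua-load-per-thread /etc/haproxy/lua/per_thread.lua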
This is the recommended way to load simple functions that register sample-fetches, converters, actions or services once it is certain the program doesn't depend on global variables. For the sake of simplicity, the directive is available even if only one thread is used and even if threads are disabled (in which case it will be equivalent to lua-load). This directive can be used multiple times. See lua-load for usage of args. ``` **lua-prepend-path** <string> [<type>] ``` Prepends the given string followed by a semicolon to Lua's package.<type> variable. <type> must either be "[path](#path)" or "cpath". If <type> is not given it defaults to "[path](#path)". Lua's paths are semicolon-delimited lists of patterns that specify how the `require` function attempts to find the source file of a library. Question marks (?) within a pattern will be replaced by the module name. The path is evaluated left to right. This implies that paths that are prepended later will be checked earlier. As an example, by specifying the following paths: lua-prepend-path /usr/share/haproxy-lua/?/init.lua lua-prepend-path /usr/share/haproxy-lua/?.lua When `require "example"` is called, Lua will first attempt to load the /usr/share/haproxy-lua/example.lua script; if that does not exist, /usr/share/haproxy-lua/example/init.lua will be attempted, and finally the default paths if that does not exist either. See https://www.lua.org/pil/8.1.html for the details within the Lua documentation. ``` **master-worker** [no-exit-on-failure] ``` Master-worker mode. It is equivalent to the command line "-W" argument. This mode will launch a "master" which will monitor the "workers". Using this mode, you can reload HAProxy directly by sending a SIGUSR2 signal to the master. The master-worker mode is compatible with either the foreground or daemon mode. By default, if a worker exits with a bad return code, in the case of a segfault for example, all workers will be killed, and the master will leave. It is convenient to combine this behavior with Restart=on-failure in a systemd unit file in order to relaunch the whole process. If you don't want this behavior, you must use the keyword "no-exit-on-failure". See also "-W" in the management guide. ``` **mworker-max-reloads** <number> ``` In master-worker mode, this option limits the number of times a worker can survive a reload. If a worker has not left after a reload and its number of reloads exceeds this number, it will receive a SIGTERM. This option helps keep the number of lingering workers under control. See also "show proc" in the Management Guide. ``` **nbthread** <number> ``` This setting is only available when support for threads was built in. It makes HAProxy run on <number> threads. "[nbthread](#nbthread)" also works when HAProxy is started in the foreground. On some platforms supporting CPU affinity, the default "[nbthread](#nbthread)" value is automatically set to the number of CPUs the process is bound to upon startup. This means that the thread count can easily be adjusted from the calling process using commands like "taskset" or "cpuset". Otherwise, this value defaults to 1. The default value is reported in the output of "haproxy -vv". ``` **numa-cpu-mapping** ``` If running on a NUMA-aware platform, HAProxy inspects on startup the CPU topology of the machine. If a multi-socket machine is detected, the affinity is automatically calculated to run on the CPUs of a single node.
This is done in order to not suffer from the performance penalties caused by the inter-socket bus latency. However, if the applied binding is non optimal on a particular architecture, it can be disabled with the statement 'no numa-cpu-mapping'. This automatic binding is also not applied if a nbthread statement is present in the configuration, or the affinity of the process is already specified, for example via the 'cpu-map' directive or the taskset utility. ``` **pidfile** <pidfile> ``` Writes PIDs of all daemons into file <pidfile> when daemon mode or writes PID of master process into file <pidfile> when master-worker mode. This option is equivalent to the "-p" command line argument. The file must be accessible to the user starting the process. See also "[daemon](#daemon)" and "[master-worker](#master-worker)". ``` **pp2-never-send-local** ``` A bug in the PROXY protocol v2 implementation was present in HAProxy up to version 2.1, causing it to emit a PROXY command instead of a LOCAL command for health checks. This is particularly minor but confuses some servers' logs. Sadly, the bug was discovered very late and revealed that some servers which possibly only tested their PROXY protocol implementation against HAProxy fail to properly handle the LOCAL command, and permanently remain in the "down" state when HAProxy checks them. When this happens, it is possible to enable this global option to revert to the older (bogus) behavior for the time it takes to contact the affected components' vendors and get them fixed. This option is disabled by default and acts on all servers having the "[send-proxy-v2](#send-proxy-v2)" statement. ``` **presetenv** <name> <value> ``` Sets environment variable <name> to value <value>. If the variable exists, it is NOT overwritten. The changes immediately take effect so that the next line in the configuration file sees the new value. See also "[setenv](#setenv)", "[resetenv](#resetenv)", and "[unsetenv](#unsetenv)". ``` **resetenv** [<name> ...] ``` Removes all environment variables except the ones specified in argument. It allows to use a clean controlled environment before setting new values with setenv or unsetenv. Please note that some internal functions may make use of some environment variables, such as time manipulation functions, but also OpenSSL or even external checks. This must be used with extreme care and only after complete validation. The changes immediately take effect so that the next line in the configuration file sees the new environment. See also "[setenv](#setenv)", "[presetenv](#presetenv)", and "[unsetenv](#unsetenv)". ``` **server-state-base** <directory> ``` Specifies the directory prefix to be prepended in front of all servers state file names which do not start with a '/'. See also "[server-state-file](#server-state-file)", "[load-server-state-from-file](#load-server-state-from-file)" and "[server-state-file-name](#server-state-file-name)". ``` **server-state-file** <file> ``` Specifies the path to the file containing state of servers. If the path starts with a slash ('/'), it is considered absolute, otherwise it is considered relative to the directory specified using "[server-state-base](#server-state-base)" (if set) or to the current directory. Before reloading HAProxy, it is possible to save the servers' current state using the stats command "show servers state". The output of this command must be written in the file pointed by <file>. 
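A minimal sketch of such a setup (the paths are illustrative; the state is dumped from the stats socket right before a reload, and backends opt in with "load-server-state-from-file global"):

   global
      server-state-base /var/lib/haproxy
      server-state-file server.state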
When starting up, before handling traffic, HAProxy will read, load and apply state for each server found in the file and available in its current running configuration. See also "[server-state-base](#server-state-base)" and "show servers state", "[load-server-state-from-file](#load-server-state-from-file)" and "[server-state-file-name](#server-state-file-name)" ``` **set-dumpable** ``` This option is better left disabled by default and enabled only upon a developer's request. If it has been enabled, it may still be forcibly disabled by prefixing it with the "no" keyword. It has no impact on performance nor stability but will try hard to re-enable core dumps that were possibly disabled by file size limitations (ulimit -f), core size limitations (ulimit -c), or "dumpability" of a process after changing its UID/GID (such as /proc/sys/fs/suid_dumpable on Linux). Core dumps might still be limited by the current directory's permissions (check what directory the file is started from), the chroot directory's permission (it may be needed to temporarily disable the chroot directive or to move it to a dedicated writable location), or any other system-specific constraint. For example, some Linux flavours are notorious for replacing the default core file with a path to an executable not even installed on the system (check /proc/sys/kernel/core_pattern). Often, simply writing "core", "core.%p" or "/var/log/core/core.%p" addresses the issue. When trying to enable this option waiting for a rare issue to re-appear, it's often a good idea to first try to obtain such a dump by issuing, for example, "kill -11" to the "haproxy" process and verify that it leaves a core where expected when dying. ``` **set-var** <var-name> <expr> ``` Sets the process-wide variable '<var-name>' to the result of the evaluation of the sample expression <expr>. The variable '<var-name>' may only be a process-wide variable (using the 'proc.' prefix). It works exactly like the 'set-var' action in TCP or HTTP rules except that the expression is evaluated at configuration parsing time and that the variable is instantly set. The sample fetch functions and converters permitted in the expression are only those using internal data, typically 'int(value)' or 'str(value)'. It is possible to reference previously allocated variables as well. These variables will then be readable (and modifiable) from the regular rule sets. ``` Example: ``` global set-var proc.current_state str(primary) set-var proc.prio int(100) set-var proc.threshold int(200),sub(proc.prio) ``` **set-var-fmt** <var-name> <fmt> ``` Sets the process-wide variable '<var-name>' to the string resulting from the evaluation of the log-format <fmt>. The variable '<var-name>' may only be a process-wide variable (using the 'proc.' prefix). It works exactly like the 'set-var-fmt' action in TCP or HTTP rules except that the expression is evaluated at configuration parsing time and that the variable is instantly set. The sample fetch functions and converters permitted in the expression are only those using internal data, typically 'int(value)' or 'str(value)'. It is possible to reference previously allocated variables as well. These variables will then be readable (and modifiable) from the regular rule sets. Please see [section 8.2.4](#8.2.4) for details on the log-format syntax. ``` Example: ``` global set-var-fmt proc.current_state "primary" set-var-fmt proc.bootid "%pid|%t" ``` **setenv** <name> <value> ``` Sets environment variable <name> to value <value>. 
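For instance (the name and value are purely illustrative):

   global
      setenv LOG_TARGET 127.0.0.1:514
   # the value may then be referenced later in the file, e.g. log "${LOG_TARGET}" local0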
If the variable exists, it is overwritten. The changes immediately take effect so that the next line in the configuration file sees the new value. See also "[presetenv](#presetenv)", "[resetenv](#resetenv)", and "[unsetenv](#unsetenv)". ``` **ssl-default-bind-ciphers** <ciphers> ``` This setting is only available when support for OpenSSL was built in. It sets the default string describing the list of cipher algorithms ("cipher suite") that are negotiated during the SSL/TLS handshake up to TLSv1.2 for all "bind" lines which do not explicitly define theirs. The format of the string is defined in "man 1 ciphers" from OpenSSL man pages. For background information and recommendations see e.g. (https://wiki.mozilla.org/Security/Server_Side_TLS) and (https://mozilla.github.io/server-side-tls/ssl-config-generator/). For TLSv1.3 cipher configuration, please check the "[ssl-default-bind-ciphersuites](#ssl-default-bind-ciphersuites)" keyword. Please check the "bind" keyword for more information. ``` **ssl-default-bind-ciphersuites** <ciphersuites> ``` This setting is only available when support for OpenSSL was built in and OpenSSL 1.1.1 or later was used to build HAProxy. It sets the default string describing the list of cipher algorithms ("cipher suite") that are negotiated during the TLSv1.3 handshake for all "bind" lines which do not explicitly define theirs. The format of the string is defined in "man 1 ciphers" from OpenSSL man pages under the section "ciphersuites". For cipher configuration for TLSv1.2 and earlier, please check the "[ssl-default-bind-ciphers](#ssl-default-bind-ciphers)" keyword. Please check the "bind" keyword for more information. ``` **ssl-default-bind-curves** <curves> ``` This setting is only available when support for OpenSSL was built in. It sets the default string describing the list of elliptic curves algorithms ("curve suite") that are negotiated during the SSL/TLS handshake with ECDHE. The format of the string is a colon-delimited list of curve name. Please check the "bind" keyword for more information. ``` **ssl-default-bind-options** [<option>]... ``` This setting is only available when support for OpenSSL was built in. It sets default ssl-options to force on all "bind" lines. Please check the "bind" keyword to see available options. ``` Example: ``` global ssl-default-bind-options ssl-min-ver TLSv1.0 no-tls-tickets ``` **ssl-default-server-ciphers** <ciphers> ``` This setting is only available when support for OpenSSL was built in. It sets the default string describing the list of cipher algorithms that are negotiated during the SSL/TLS handshake up to TLSv1.2 with the server, for all "server" lines which do not explicitly define theirs. The format of the string is defined in "man 1 ciphers" from OpenSSL man pages. For background information and recommendations see e.g. (https://wiki.mozilla.org/Security/Server_Side_TLS) and (https://mozilla.github.io/server-side-tls/ssl-config-generator/). For TLSv1.3 cipher configuration, please check the "[ssl-default-server-ciphersuites](#ssl-default-server-ciphersuites)" keyword. Please check the "server" keyword for more information. ``` **ssl-default-server-ciphersuites** <ciphersuites> ``` This setting is only available when support for OpenSSL was built in and OpenSSL 1.1.1 or later was used to build HAProxy. It sets the default string describing the list of cipher algorithms that are negotiated during the TLSv1.3 handshake with the server, for all "server" lines which do not explicitly define theirs. 
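As an illustration only (the exact list is an assumption and should follow local security policy):

   global
      ssl-default-server-ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256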
The format of the string is defined in "man 1 ciphers" from OpenSSL man pages under the section "ciphersuites". For cipher configuration for TLSv1.2 and earlier, please check the "[ssl-default-server-ciphers](#ssl-default-server-ciphers)" keyword. Please check the "server" keyword for more information. ``` **ssl-default-server-options** [<option>]... ``` This setting is only available when support for OpenSSL was built in. It sets default ssl-options to force on all "server" lines. Please check the "server" keyword to see available options. ``` **ssl-dh-param-file** <file> ``` This setting is only available when support for OpenSSL was built in. It sets the default DH parameters that are used during the SSL/TLS handshake when ephemeral Diffie-Hellman (DHE) key exchange is used, for all "bind" lines which do not explicitly define theirs. It will be overridden by custom DH parameters found in a bind certificate file if any. If custom DH parameters are not specified either by using ssl-dh-param-file or by setting them directly in the certificate file, DHE ciphers will not be used, unless tune.ssl.default-dh-param is set. In this latter case, pre-defined DH parameters of the specified size will be used. Custom parameters are known to be more secure and therefore their use is recommended. Custom DH parameters may be generated by using the OpenSSL command "openssl dhparam <size>", where size should be at least 2048, as 1024-bit DH parameters should not be considered secure anymore. ``` **ssl-propquery** <query> ``` This setting is only available when support for OpenSSL was built in and when OpenSSL's version is at least 3.0. It allows to define a default property string used when fetching algorithms in providers. It behave the same way as the openssl propquery option and it follows the same syntax (described in https://www.openssl.org/docs/man3.0/man7/property.html). For instance, if you have two providers loaded, the foo one and the default one, the propquery "?provider=foo" allows to pick the algorithm implementations provided by the foo provider by default, and to fallback on the default provider's one if it was not found. ``` **ssl-provider** <name> ``` This setting is only available when support for OpenSSL was built in and when OpenSSL's version is at least 3.0. It allows to load a provider during init. If loading is successful, any capabilities provided by the loaded provider might be used by HAProxy. Multiple 'ssl-provider' options can be specified in a configuration file. The providers will be loaded in their order of appearance. Please note that loading a provider explicitly prevents OpenSSL from loading the 'default' provider automatically. OpenSSL also allows to define the providers that should be loaded directly in its configuration file (openssl.cnf for instance) so it is not necessary to use this 'ssl-provider' option to load providers. The "show ssl providers" CLI command can be used to show all the providers that were successfully loaded. The default search path of OpenSSL provider can be found in the output of the "openssl version -a" command. If the provider is in another directory, you can set the OPENSSL_MODULES environment variable, which takes the directory where your provider can be found. See also "[ssl-propquery](#ssl-propquery)" and "[ssl-provider-path](#ssl-provider-path)". ``` **ssl-provider-path** <path> ``` This setting is only available when support for OpenSSL was built in and when OpenSSL's version is at least 3.0. 
It allows specifying the search path that is to be used by OpenSSL for looking for providers. It behaves the same way as the OPENSSL_MODULES environment variable. It will be used for any following 'ssl-provider' option or until a new 'ssl-provider-path' is defined. See also "[ssl-provider](#ssl-provider)". ``` **ssl-load-extra-del-ext** ``` This setting allows configuring the way HAProxy looks up the extra SSL files. By default HAProxy adds a new extension to the filename (ex: with "foobar.crt", load "foobar.crt.key"). With this option enabled, HAProxy removes the extension before adding the new one (ex: with "foobar.crt", load "foobar.key"). Your crt file must have a ".crt" extension for this option to work. This option is not compatible with bundle extensions (.ecdsa, .rsa, .dsa) and won't try to remove them. This option is disabled by default. See also "[ssl-load-extra-files](#ssl-load-extra-files)". ``` **ssl-load-extra-files** <none|all|bundle|sctl|ocsp|issuer|key>\* ``` This setting alters the way HAProxy will look for unspecified files during the loading of the SSL certificates. This option applies to certificates associated with "bind" lines as well as "server" lines, but some of the extra files will not have any functional impact for "server" line certificates. By default, HAProxy automatically discovers a lot of files not specified in the configuration, and you may want to disable this behavior if you want to optimize the startup time. "none": Only load the files specified in the configuration. Don't try to load a certificate bundle if the file does not exist. In the case of a directory, it won't try to bundle the certificates if they have the same basename. "all": This is the default behavior, it will try to load everything: bundles, sctl, ocsp, issuer, key. "bundle": When a file specified in the configuration does not exist, HAProxy will try to load a "cert bundle". Certificate bundles are only managed on the frontend side and will not work for backend certificates. Starting from HAProxy 2.3, the bundles are not loaded into the same OpenSSL certificate store; instead each certificate is loaded into a separate store, which is equivalent to declaring multiple "crt". OpenSSL 1.1.1 is required to achieve this, which means that bundles are now used only for backward compatibility and are no longer mandatory for a hybrid RSA/ECC bind configuration. To associate these PEM files into a "cert bundle" that is recognized by HAProxy, they must be named in the following way: All PEM files that are to be bundled must have the same base name, with a suffix indicating the key type. Currently, three suffixes are supported: rsa, dsa and ecdsa. For example, if www.example.com has two PEM files, an RSA file and an ECDSA file, they must be named: "example.pem.rsa" and "example.pem.ecdsa". The first part of the filename is arbitrary; only the suffix matters. To load this bundle into HAProxy, specify the base name only: ``` Example : ``` bind :8443 ssl crt example.pem ``` ``` Note that the suffix is not given to HAProxy; this tells HAProxy to look for a cert bundle. HAProxy will load all PEM files in the bundle as if they were configured separately in several "crt". The bundle loading no longer has an impact on the directory loading since files are loaded separately. On the CLI, bundles are seen as separate files, and the bundle extension is required to commit them.
OCSP files (.ocsp), issuer files (.issuer), Certificate Transparency (.sctl) as well as private keys (.key) are supported with multi-cert bundling. "sctl": Try to load "<basename>.sctl" for each crt keyword. If provided for a backend certificate, it will be loaded but will not have any functional impact. "ocsp": Try to load "<basename>.ocsp" for each crt keyword. If provided for a backend certificate, it will be loaded but will not have any functional impact. "issuer": Try to load "<basename>.issuer" if the issuer of the OCSP file is not provided in the PEM file. If provided for a backend certificate, it will be loaded but will not have any functional impact. "key": If the private key was not provided by the PEM file, try to load a file "<basename>.key" containing a private key. The default behavior is "all". ``` Example: ``` ssl-load-extra-files bundle sctl ssl-load-extra-files sctl ocsp issuer ssl-load-extra-files none ``` **See also:** "crt", [section 5.1](#5.1) about bind options and [section 5.2](#5.2) about server options. **ssl-server-verify** [none|required] ``` The default behavior for SSL verify on servers side. If specified to 'none', servers certificates are not verified. The default is 'required' except if forced using cmdline option '-dV'. ``` **ssl-skip-self-issued-ca** ``` Self issued CA, aka x509 root CA, is the anchor for chain validation: as a server is useless to send it, client must have it. Standard configuration need to not include such CA in PEM file. This option allows you to keep such CA in PEM file without sending it to the client. Use case is to provide issuer for ocsp without the need for '.issuer' file and be able to share it with 'issuers-chain-path'. This concerns all certificates without intermediate certificates. It's useless for BoringSSL, .issuer is ignored because ocsp bits does not need it. Requires at least OpenSSL 1.0.2. ``` **stats maxconn** <connections> ``` By default, the stats socket is limited to 10 concurrent connections. It is possible to change this value with "[stats maxconn](#stats%20maxconn)". ``` **stats socket** [<address:port>|<path>] [param\*] ``` Binds a UNIX socket to <path> or a TCPv4/v6 address to <address:port>. Connections to this socket will return various statistics outputs and even allow some commands to be issued to change some runtime settings. Please consult [section 9.3](#9.3) "Unix Socket commands" of Management Guide for more details. All parameters supported by "bind" lines are supported, for instance to restrict access to some users or their access rights. Please consult [section 5.1](#5.1) for more information. ``` **stats timeout** <timeout, in milliseconds> ``` The default timeout on the stats socket is set to 10 seconds. It is possible to change this value with "[stats timeout](#stats%20timeout)". The value must be passed in milliseconds, or be suffixed by a time unit among { us, ms, s, m, h, d }. ``` **strict-limits** ``` Makes process fail at startup when a setrlimit fails. HAProxy tries to set the best setrlimit according to what has been calculated. If it fails, it will emit a warning. This option is here to guarantee an explicit failure of HAProxy when those limits fail. It is enabled by default. It may still be forcibly disabled by prefixing it with the "no" keyword. ``` **thread-group** <group> [<thread-range>...] ``` This setting is only available when support for threads was built in. It enumerates the list of threads that will compose thread group <group>. Thread numbers and group numbers start at 1. 
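As an illustrative sketch (the thread count is an assumption), 16 threads may be split into two groups of 8:

   global
      nbthread      16
      thread-groups 2
      thread-group  1  1-8
      thread-group  2  9-16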
Thread ranges are defined either using a single thread number at once, or by specifying the lower and upper bounds delimited by a dash '-' (e.g. "1-16"). Unassigned threads will be automatically assigned to unassigned thread groups, and thread groups defined with this directive will never receive more threads than those defined. Defining the same group multiple times overrides previous definitions with the new one. See also "[nbthread](#nbthread)" and "[thread-groups](#thread-groups)". ``` **thread-groups** <number> ``` This setting is only available when support for threads was built in. It makes HAProxy split its threads into <number> independent groups. At the moment, the default value is 1. Thread groups make it possible to reduce sharing between threads to limit contention, at the expense of some extra configuration efforts. It is also the only way to use more than 64 threads since up to 64 threads per group may be configured. The maximum number of groups is configured at compile time and defaults to 16. See also "[nbthread](#nbthread)". ``` **trace** <args...> ``` This command configures one "[trace](#trace)" subsystem statement. Each of them can be found in the management manual, and follow the exact same syntax. Only one statement per line is permitted (i.e. if some long trace configurations using semi-colons are to be imported, they must be placed one per line). Any output that the "[trace](#trace)" command would produce will be emitted during the parsing step of the section. Most of the time these will be errors and warnings, but certain incomplete commands might list permissible choices. This command is not meant for regular use, it will generally only be suggested by developers along complex debugging sessions. For this reason it is internally marked as experimental, meaning that "[expose-experimental-directives](#expose-experimental-directives)" must appear on a line before any "[trace](#trace)" statement. Note that these directives are parsed on the fly, so referencing a ring buffer that is only declared further will not work. For such use cases it is suggested to place another "global" section with only the "[trace](#trace)" statements after the declaration of that ring. It is important to keep in mind that depending on the trace level and details, enabling traces can severely degrade the global performance. Please refer to the management manual for the statements syntax. ``` **uid** <number> ``` Changes the process's user ID to <number>. It is recommended that the user ID is dedicated to HAProxy or to a small set of similar daemons. HAProxy must be started with superuser privileges in order to be able to switch to another one. See also "gid" and "user". ``` **ulimit-n** <number> ``` Sets the maximum number of per-process file-descriptors to <number>. By default, it is automatically computed, so it is recommended not to use this option. If the intent is only to limit the number of file descriptors, better use "[fd-hard-limit](#fd-hard-limit)" instead. Note that the dynamic servers are not taken into account in this automatic resource calculation. If using a large number of them, it may be needed to manually specify this value. ``` **See also:** fd-hard-limit, maxconn **unix-bind** [ prefix <prefix> ] [ mode <mode> ] [ user <user> ] [ uid <uid> ] [ group <group> ] [ gid <gid> ] ``` Fixes common settings to UNIX listening sockets declared in "bind" statements. 
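For example (the path, mode and owner are illustrative only):

   global
      unix-bind prefix /var/run/haproxy mode 660 user haproxy group haproxy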
This is mainly used to simplify declaration of those UNIX sockets and reduce the risk of errors, since those settings are most commonly required but are also process-specific. The <prefix> setting can be used to force all socket path to be relative to that directory. This might be needed to access another component's chroot. Note that those paths are resolved before HAProxy chroots itself, so they are absolute. The <mode>, <user>, <uid>, <group> and <gid> all have the same meaning as their homonyms used by the "bind" statement. If both are specified, the "bind" statement has priority, meaning that the "[unix-bind](#unix-bind)" settings may be seen as process-wide default settings. ``` **unsetenv** [<name> ...] ``` Removes environment variables specified in arguments. This can be useful to hide some sensitive information that are occasionally inherited from the user's environment during some operations. Variables which did not exist are silently ignored so that after the operation, it is certain that none of these variables remain. The changes immediately take effect so that the next line in the configuration file will not see these variables. See also "[setenv](#setenv)", "[presetenv](#presetenv)", and "[resetenv](#resetenv)". ``` **user** <user name> ``` Similar to "uid" but uses the UID of user name <user name> from /etc/passwd. See also "uid" and "group". ``` **node** <name> ``` Only letters, digits, hyphen and underscore are allowed, like in DNS names. This statement is useful in HA configurations where two or more processes or servers share the same IP address. By setting a different node-name on all nodes, it becomes easy to immediately spot what server is handling the traffic. ``` **wurfl-cache-size** <size> ``` Sets the WURFL Useragent cache size. For faster lookups, already processed user agents are kept in a LRU cache : - "0" : no cache is used. - <size> : size of lru cache in elements. Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. ``` **wurfl-data-file** <file path> ``` The path of the WURFL data file to provide device detection services. The file should be accessible by HAProxy with relevant permissions. Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. ``` **wurfl-information-list** [<capability>]\* ``` A space-delimited list of WURFL capabilities, virtual capabilities, property names we plan to use in injected headers. A full list of capability and virtual capability names is available on the Scientiamobile website : https://www.scientiamobile.com/wurflCapability Valid WURFL properties are: - wurfl_id Contains the device ID of the matched device. - wurfl_root_id Contains the device root ID of the matched device. - wurfl_isdevroot Tells if the matched device is a root device. Possible values are "TRUE" or "FALSE". - wurfl_useragent The original useragent coming with this particular web request. - wurfl_api_version Contains a string representing the currently used Libwurfl API version. - wurfl_info A string containing information on the parsed wurfl.xml and its full path. - wurfl_last_load_time Contains the UNIX timestamp of the last time WURFL has been loaded successfully. - wurfl_normalized_useragent The normalized useragent. Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. ``` **wurfl-information-list-separator** <char> ``` A char that will be used to separate values in a response header containing WURFL results. 
If not set, a comma (',') will be used by default. Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. ``` **wurfl-patch-file** [<file path>] ``` A list of WURFL patch file paths. Note that patches are loaded during startup, thus before the chroot. Please note that this option is only available when HAProxy has been compiled with USE_WURFL=1. ``` ### 3.2. Performance tuning **busy-polling** ``` In some situations, especially when dealing with low latency on processors supporting a variable frequency or when running inside virtual machines, each time the process waits for an I/O using the poller, the processor goes back to sleep or is offered to another VM for a long time, and it causes excessively high latencies. This option provides a solution preventing the processor from sleeping by always using a null timeout on the pollers. This results in a significant latency reduction (30 to 100 microseconds observed) at the expense of a risk of overheating the processor. It may even be used with threads, in which case improperly bound threads may heavily conflict, resulting in worse performance and high values for the CPU stolen fields in "show info" output, indicating which threads are misconfigured. It is important not to let the process run on the same processor as the network interrupts when this option is used. It is also better to avoid using it on multiple CPU threads sharing the same core. This option is disabled by default. If it has been enabled, it may still be forcibly disabled by prefixing it with the "no" keyword. It is ignored by the "select" and "poll" pollers. This option is automatically disabled on old processes in the context of seamless reload; it avoids too many CPU conflicts when multiple processes stay around for some time waiting for the end of their current connections. ``` **max-spread-checks** <delay in milliseconds> ``` By default, HAProxy tries to spread the start of health checks across the smallest health check interval of all the servers in a farm. The principle is to avoid hammering services running on the same server. But when using large check intervals (10 seconds or more), the last servers in the farm take some time before starting to be tested, which can be a problem. This parameter is used to enforce an upper bound on the delay between the first and the last check, even if the servers' check intervals are larger. When servers run with shorter intervals, their intervals will be respected though. ``` **maxcompcpuusage** <number> ``` Sets the maximum CPU usage HAProxy can reach before stopping the compression for new requests or decreasing the compression level of current requests. It works like 'maxcomprate' but measures CPU usage instead of incoming data bandwidth. The value is expressed in percent of the CPU used by HAProxy. A value of 100 disables the limit. The default value is 100. Setting a lower value will prevent the compression work from slowing the whole process down and from introducing high latencies. ``` **maxcomprate** <number> ``` Sets the maximum per-process input compression rate to <number> kilobytes per second. For each session, if the maximum is reached, the compression level will be decreased during the session. If the maximum is reached at the beginning of a session, the session will not compress at all. If the maximum is not reached, the compression level will be increased up to tune.comp.maxlevel. A value of zero means there is no limit; this is the default value.
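As a sketch of how these two compression guards might be combined (the numbers are arbitrary):

   global
      maxcomprate     2048   # limit compression input to about 2 MB/s per process
      maxcompcpuusage 75     # and stop compressing new streams above 75% CPU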
``` **maxconn** <number> ``` Sets the maximum per-process number of concurrent connections to <number>. It is equivalent to the command-line argument "-n". Proxies will stop accepting connections when this limit is reached. The "[ulimit-n](#ulimit-n)" parameter is automatically adjusted according to this value. See also "[ulimit-n](#ulimit-n)". Note: the "select" poller cannot reliably use more than 1024 file descriptors on some platforms. If your platform only supports select and reports "select FAILED" on startup, you need to reduce maxconn until it works (slightly below 500 in general). If this value is not set, it will automatically be calculated based on the current file descriptors limit reported by the "ulimit -n" command, possibly reduced to a lower value if a memory limit is enforced, based on the buffer size, memory allocated to compression, SSL cache size, and use or not of SSL and the associated maxsslconn (which can also be automatic). In any case, the fd-hard-limit applies if set. ``` **See also:** fd-hard-limit, ulimit-n **maxconnrate** <number> ``` Sets the maximum per-process number of connections per second to <number>. Proxies will stop accepting connections when this limit is reached. It can be used to limit the global capacity regardless of each frontend capacity. It is important to note that this can only be used as a service protection measure, as there will not necessarily be a fair share between frontends when the limit is reached, so it's a good idea to also limit each frontend to some value close to its expected share. Also, lowering tune.maxaccept can improve fairness. ``` **maxpipes** <number> ``` Sets the maximum per-process number of pipes to <number>. Currently, pipes are only used by kernel-based tcp splicing. Since a pipe contains two file descriptors, the "[ulimit-n](#ulimit-n)" value will be increased accordingly. The default value is maxconn/4, which seems to be more than enough for most heavy usages. The splice code dynamically allocates and releases pipes, and can fall back to standard copy, so setting this value too low may only impact performance. ``` **maxsessrate** <number> ``` Sets the maximum per-process number of sessions per second to <number>. Proxies will stop accepting connections when this limit is reached. It can be used to limit the global capacity regardless of each frontend capacity. It is important to note that this can only be used as a service protection measure, as there will not necessarily be a fair share between frontends when the limit is reached, so it's a good idea to also limit each frontend to some value close to its expected share. Also, lowering tune.maxaccept can improve fairness. ``` **maxsslconn** <number> ``` Sets the maximum per-process number of concurrent SSL connections to <number>. By default there is no SSL-specific limit, which means that the global maxconn setting will apply to all connections. Setting this limit avoids having openssl use too much memory and crash when malloc returns NULL (since it unfortunately does not reliably check for such conditions). Note that the limit applies both to incoming and outgoing connections, so one connection which is deciphered then ciphered accounts for 2 SSL connections. If this value is not set, but a memory limit is enforced, this value will be automatically computed based on the memory limit, maxconn, the buffer size, memory allocated to compression, SSL cache size, and use of SSL in either frontends, backends or both. 
If neither maxconn nor maxsslconn are specified when there is a memory limit, HAProxy will automatically adjust these values so that 100% of the connections can be made over SSL with no risk, and will consider the sides where it is enabled (frontend, backend, both). ``` **maxsslrate** <number> ``` Sets the maximum per-process number of SSL sessions per second to <number>. SSL listeners will stop accepting connections when this limit is reached. It can be used to limit the global SSL CPU usage regardless of each frontend capacity. It is important to note that this can only be used as a service protection measure, as there will not necessarily be a fair share between frontends when the limit is reached, so it's a good idea to also limit each frontend to some value close to its expected share. It is also important to note that the sessions are accounted before they enter the SSL stack and not after, which also protects the stack against bad handshakes. Also, lowering tune.maxaccept can improve fairness. ``` **maxzlibmem** <number> ``` Sets the maximum amount of RAM in megabytes per process usable by zlib. When the maximum amount is reached, future sessions will not compress as long as RAM is unavailable. When set to 0, there is no limit. The default value is 0. The value is available in bytes on the UNIX socket with "show info" on the line "MaxZlibMemUsage"; the memory used by zlib is reported as "ZlibMemUsage", also in bytes. ``` **no-memory-trimming** ``` Disables memory trimming ("malloc_trim") at a few moments where attempts are made to reclaim lots of memory (on memory shortage or on reload). Trimming memory forces the system's allocator to scan all unused areas and to release them. This is generally seen as a nice action to leave more available memory to a new process while the old one is unlikely to make significant use of it. But some systems dealing with tens to hundreds of thousands of concurrent connections may experience a lot of memory fragmentation, which may render this release operation extremely long. During this time, no more traffic passes through the process, new connections are not accepted anymore, some health checks may even fail, and the watchdog may even trigger and kill the unresponsive process, leaving a huge core dump. If this ever happens, then it is suggested to use this option to disable trimming and stop trying to be nice with the new process. Note that advanced memory allocators usually do not suffer from such a problem. ``` **noepoll** ``` Disables the use of the "epoll" event polling system on Linux. It is equivalent to the command-line argument "-de". The next polling system used will generally be "poll". See also "[nopoll](#nopoll)". ``` **noevports** ``` Disables the use of the event ports event polling system on SunOS systems derived from Solaris 10 and later. It is equivalent to the command-line argument "-dv". The next polling system used will generally be "poll". See also "[nopoll](#nopoll)". ``` **nogetaddrinfo** ``` Disables the use of getaddrinfo(3) for name resolving. It is equivalent to the command line argument "-dG". The deprecated gethostbyname(3) will be used instead. ``` **nokqueue** ``` Disables the use of the "kqueue" event polling system on BSD. It is equivalent to the command-line argument "-dk". The next polling system used will generally be "poll". See also "[nopoll](#nopoll)". ``` **nopoll** ``` Disables the use of the "poll" event polling system. It is equivalent to the command-line argument "-dp". The next polling system used will be "select".
It should never be needed to disable "poll" since it's available on all platforms supported by HAProxy. See also "[nokqueue](#nokqueue)", "[noepoll](#noepoll)" and "[noevports](#noevports)". ``` **noreuseport** ``` Disables the use of SO_REUSEPORT - see socket(7). It is equivalent to the command line argument "-dR". ``` **nosplice** ``` Disables the use of kernel tcp splicing between sockets on Linux. It is equivalent to the command line argument "-dS". Data will then be copied using conventional and more portable recv/send calls. Kernel tcp splicing is limited to some very recent instances of kernel 2.6. Most versions between 2.6.25 and 2.6.28 are buggy and will forward corrupted data, so they must not be used. This option makes it easier to globally disable kernel splicing in case of doubt. See also "[option splice-auto](#option%20splice-auto)", "[option splice-request](#option%20splice-request)" and "[option splice-response](#option%20splice-response)". ``` **profiling.memory** { on | off } ``` Enables ('on') or disables ('off') per-function memory profiling. This will keep usage statistics of malloc/calloc/realloc/free calls anywhere in the process (including libraries) which will be reported on the CLI using the "show profiling" command. This is essentially meant to be used when an abnormal memory usage is observed that cannot be explained by the pools and other info are required. The performance hit will typically be around 1%, maybe a bit more on highly threaded machines, so it is normally suitable for use in production. The same may be achieved at run time on the CLI using the "set profiling memory" command, please consult the management manual. ``` **profiling.tasks** { auto | on | off } ``` Enables ('on') or disables ('off') per-task CPU profiling. When set to 'auto' the profiling automatically turns on a thread when it starts to suffer from an average latency of 1000 microseconds or higher as reported in the "avg_loop_us" activity field, and automatically turns off when the latency returns below 990 microseconds (this value is an average over the last 1024 loops so it does not vary quickly and tends to significantly smooth short spikes). It may also spontaneously trigger from time to time on overloaded systems, containers, or virtual machines, or when the system swaps (which must absolutely never happen on a load balancer). CPU profiling per task can be very convenient to report where the time is spent and which requests have what effect on which other request. Enabling it will typically affect the overall's performance by less than 1%, thus it is recommended to leave it to the default 'auto' value so that it only operates when a problem is identified. This feature requires a system supporting the clock_gettime(2) syscall with clock identifiers CLOCK_MONOTONIC and CLOCK_THREAD_CPUTIME_ID, otherwise the reported time will be zero. This option may be changed at run time using "set profiling" on the CLI. ``` **spread-checks** <0..50, in percent> ``` Sometimes it is desirable to avoid sending agent and health checks to servers at exact intervals, for instance when many logical servers are located on the same physical server. With the help of this parameter, it becomes possible to add some randomness in the check interval between 0 and +/- 50%. A value between 2 and 5 seems to show good results. The default value remains at 0. ``` **ssl-engine** <name> [algo <comma-separated list of algorithms>] ``` Sets the OpenSSL engine to <name>. 
List of valid values for <name> may be obtained using the command "openssl engine". This statement may be used multiple times, it will simply enable multiple crypto engines. Referencing an unsupported engine will prevent HAProxy from starting. Note that many engines will lead to lower HTTPS performance than pure software with recent processors. The optional command "algo" sets the default algorithms an ENGINE will supply using the OPENSSL function ENGINE_set_default_string(). A value of "ALL" uses the engine for all cryptographic operations. If no list of algo is specified then the value of "ALL" is used. A comma-separated list of different algorithms may be specified, including: RSA, DSA, DH, EC, RAND, CIPHERS, DIGESTS, PKEY, PKEY_CRYPTO, PKEY_ASN1. This is the same format that openssl configuration file uses: https://www.openssl.org/docs/man1.0.2/apps/config.html ``` **ssl-mode-async** ``` Adds SSL_MODE_ASYNC mode to the SSL context. This enables asynchronous TLS I/O operations if asynchronous capable SSL engines are used. The current implementation supports a maximum of 32 engines. The Openssl ASYNC API doesn't support moving read/write buffers and is not compliant with HAProxy's buffer management. So the asynchronous mode is disabled on read/write operations (it is only enabled during initial and renegotiation handshakes). ``` **tune.buffers.limit** <number> ``` Sets a hard limit on the number of buffers which may be allocated per process. The default value is zero which means unlimited. The minimum non-zero value will always be greater than "[tune.buffers.reserve](#tune.buffers.reserve)" and should ideally always be about twice as large. Forcing this value can be particularly useful to limit the amount of memory a process may take, while retaining a sane behavior. When this limit is reached, sessions which need a buffer wait for another one to be released by another session. Since buffers are dynamically allocated and released, the waiting time is very short and not perceptible provided that limits remain reasonable. In fact sometimes reducing the limit may even increase performance by increasing the CPU cache's efficiency. Tests have shown good results on average HTTP traffic with a limit to 1/10 of the expected global maxconn setting, which also significantly reduces memory usage. The memory savings come from the fact that a number of connections will not allocate 2*tune.bufsize. It is best not to touch this value unless advised to do so by an HAProxy core developer. ``` **tune.buffers.reserve** <number> ``` Sets the number of buffers which are pre-allocated and reserved for use only during memory shortage conditions resulting in failed memory allocations. The minimum value is 2 and is also the default. There is no reason a user would want to change this value, it's mostly aimed at HAProxy core developers. ``` **tune.bufsize** <number> ``` Sets the buffer size to this size (in bytes). Lower values allow more sessions to coexist in the same amount of RAM, and higher values allow some applications with very large cookies to work. The default value is 16384 and can be changed at build time. It is strongly recommended not to change this from the default value, as very low values will break some services such as statistics, and values larger than default size will increase memory usage, possibly causing the system to run out of memory. At least the global maxconn parameter should be decreased by the same factor as this one is increased. 
In addition, use of HTTP/2 mandates that this value must be 16384 or more. If an HTTP request is larger than (tune.bufsize - tune.maxrewrite), HAProxy will return an HTTP 400 (Bad Request) error. Similarly, if an HTTP response is larger than this size, HAProxy will return an HTTP 502 (Bad Gateway) error. Note that the value set using this parameter will automatically be rounded up to the next multiple of 8 on 32-bit machines and 16 on 64-bit machines. ``` **tune.comp.maxlevel** <number> ``` Sets the maximum compression level. The compression level affects CPU usage during compression. Each session using compression initializes the compression algorithm with this value. The default value is 1. ``` **tune.fail-alloc** ``` If compiled with DEBUG_FAIL_ALLOC or started with "-dMfail", gives the percentage of chances an allocation attempt fails. Must be between 0 (no failure) and 100 (no success). This is useful to debug and make sure memory failures are handled gracefully. ``` **tune.fd.edge-triggered** { on | off } [ EXPERIMENTAL ] ``` Enables ('on') or disables ('off') the edge-triggered polling mode for FDs that support it. This is currently only supported with epoll. It may noticeably reduce the number of epoll_ctl() calls and slightly improve performance in certain scenarios. This is still experimental: it may result in frozen connections if bugs are still present, and it is disabled by default. ``` **tune.h2.header-table-size** <number> ``` Sets the HTTP/2 dynamic header table size. It defaults to 4096 bytes and cannot be larger than 65536 bytes. A larger value may help certain clients send more compact requests, depending on their capabilities. This amount of memory is consumed for each HTTP/2 connection. It is recommended not to change it. ``` **tune.h2.initial-window-size** <number> ``` Sets the HTTP/2 initial window size, which is the number of bytes the client can upload before waiting for an acknowledgment from HAProxy. This setting only affects payload contents (i.e. the body of POST requests), not headers. The default value is 65536, which roughly allows up to 5 Mbps of upload bandwidth per client over a network showing a 100 ms ping time, or 500 Mbps over a 1-ms local network. It can make sense to increase this value to allow faster uploads, or to reduce it to increase fairness when dealing with many clients. It doesn't affect resource usage. ``` **tune.h2.max-concurrent-streams** <number> ``` Sets the HTTP/2 maximum number of concurrent streams per connection (i.e. the number of outstanding requests on a single connection). The default value is 100. A larger one may slightly improve page load time for complex sites when visited over high latency networks, but increases the amount of resources a single client may allocate. A value of zero disables the limit so a single client may create as many streams as allocatable by HAProxy. It is highly recommended not to change this value. ``` **tune.h2.max-frame-size** <number> ``` Sets the HTTP/2 maximum frame size that HAProxy announces to its peers that it is willing to receive. The default value is the largest between 16384 and the buffer size (tune.bufsize). In any case, HAProxy will not announce support for frame sizes larger than buffers. The main purpose of this setting is to allow limiting the maximum frame size when using large buffers. Too large frame sizes might have a performance impact or cause some peers to misbehave. It is highly recommended not to change this value.
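```
For orientation only, here is a sketch of how these HTTP/2-related tunables could appear in a "global" section; the values simply restate the documented defaults and are not a recommendation, since the text above advises leaving most of them untouched:
```
global
    tune.bufsize                   16384   # HTTP/2 requires at least 16384 bytes
    tune.h2.header-table-size      4096    # HPACK dynamic table size, per connection
    tune.h2.initial-window-size    65536   # bytes a client may upload before an ACK is required
    tune.h2.max-concurrent-streams 100     # outstanding requests per connection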
``` **tune.http.cookielen** <number> ``` Sets the maximum length of captured cookies. This is the maximum value that the "capture cookie xxx len yyy" will be allowed to take, and any upper value will automatically be truncated to this one. It is important not to set too high a value because all cookie captures still allocate this size whatever their configured value (they share a same pool). This value is per request per response, so the memory allocated is twice this value per connection. When not specified, the limit is set to 63 characters. It is recommended not to change this value. ``` **tune.http.logurilen** <number> ``` Sets the maximum length of request URI in logs. This prevents truncating long request URIs with valuable query strings in log lines. This is not related to syslog limits. If you increase this limit, you may also increase the 'log ... len yyy' parameter. Your syslog daemon may also need specific configuration directives too. The default value is 1024. ``` **tune.http.maxhdr** <number> ``` Sets the maximum number of headers in a request. When a request comes with a number of headers greater than this value (including the first line), it is rejected with a "400 Bad Request" status code. Similarly, too large responses are blocked with "502 Bad Gateway". The default value is 101, which is enough for all usages, considering that the widely deployed Apache server uses the same limit. It can be useful to push this limit further to temporarily allow a buggy application to work by the time it gets fixed. The accepted range is 1..32767. Keep in mind that each new header consumes 32bits of memory for each session, so don't push this limit too high. ``` **tune.idle-pool.shared** { on | off } ``` Enables ('on') or disables ('off') sharing of idle connection pools between threads for a same server. The default is to share them between threads in order to minimize the number of persistent connections to a server, and to optimize the connection reuse rate. But to help with debugging or when suspecting a bug in HAProxy around connection reuse, it can be convenient to forcefully disable this idle pool sharing between multiple threads, and force this option to "off". The default is on. It is strongly recommended against disabling this option without setting a conservative value on "[pool-low-conn](#pool-low-conn)" for all servers relying on connection reuse to achieve a high performance level, otherwise connections might be closed very often as the thread count increases. ``` **tune.idletimer** <timeout> ``` Sets the duration after which HAProxy will consider that an empty buffer is probably associated with an idle stream. This is used to optimally adjust some packet sizes while forwarding large and small data alternatively. The decision to use splice() or to send large buffers in SSL is modulated by this parameter. The value is in milliseconds between 0 and 65535. A value of zero means that HAProxy will not try to detect idle streams. The default is 1000, which seems to correctly detect end user pauses (e.g. read a page before clicking). There should be no reason for changing this value. Please check tune.ssl.maxrecord below. ``` **tune.listener.multi-queue** { on | off } ``` Enables ('on') or disables ('off') the listener's multi-queue accept which spreads the incoming traffic to all threads a "bind" line is allowed to run on instead of taking them for itself. 
This provides a smoother traffic distribution and scales much better, especially in environments where threads may be unevenly loaded due to external activity (network interrupts colliding with one thread for example). This option is enabled by default, but it may be forcefully disabled for troubleshooting or for situations where it is estimated that the operating system already provides a good enough distribution and connections are extremely short-lived. ``` **tune.lua.forced-yield** <number> ``` This directive forces the Lua engine to execute a yield every <number> instructions executed. This permits interrupting a long script and allows the HAProxy scheduler to process other tasks like accepting connections or forwarding traffic. The default value is 10000 instructions. If HAProxy often executes some Lua code but more responsiveness is required, this value can be lowered. If the Lua code is quite long and its result is absolutely required to process the data, the <number> can be increased. ``` **tune.lua.maxmem** ``` Sets the maximum amount of RAM in megabytes per process usable by Lua. By default it is zero, which means unlimited. It is important to set a limit to ensure that a bug in a script will not result in the system running out of memory. ``` **tune.lua.session-timeout** <timeout> ``` This is the execution timeout for the Lua sessions. This is useful for preventing infinite loops or spending too much time in Lua. This timeout counts only the pure Lua runtime. If the Lua code does a sleep, the sleep is not taken into account. The default timeout is 4s. ``` **tune.lua.service-timeout** <timeout> ``` This is the execution timeout for the Lua services. This is useful for preventing infinite loops or spending too much time in Lua. This timeout counts only the pure Lua runtime. If the Lua code does a sleep, the sleep is not taken into account. The default timeout is 4s. ``` **tune.lua.task-timeout** <timeout> ``` The purpose is the same as "[tune.lua.session-timeout](#tune.lua.session-timeout)", but this timeout is dedicated to the tasks. By default, this timeout isn't set because a task may remain alive for the whole lifetime of HAProxy (for example, a task used to check servers). ``` **tune.maxaccept** <number> ``` Sets the maximum number of consecutive connections a process may accept in a row before switching to other work. In single process mode, higher numbers used to give better performance at high connection rates, though this is not the case anymore with the multi-queue. This value applies individually to each listener, so that the number of processes a listener is bound to is taken into account. This value defaults to 4, which showed the best results. If a significantly higher value was inherited from an ancient config, it might be worth removing it as it will both increase performance and lower response time. In multi-process mode, it is divided by twice the number of processes the listener is bound to. Setting this value to -1 completely disables the limitation. It should normally not be needed to tweak this value. ``` **tune.maxpollevents** <number> ``` Sets the maximum number of events that can be processed at once in a call to the polling system. The default value is adapted to the operating system. It has been noticed that reducing it below 200 tends to slightly decrease latency at the expense of network bandwidth, and increasing it above 200 tends to trade latency for slightly increased bandwidth. ``` **tune.maxrewrite** <number> ``` Sets the reserved buffer space to this size in bytes.
The reserved space is used for header rewriting or appending. The first reads on sockets will never fill more than bufsize-maxrewrite. Historically it has defaulted to half of bufsize, though that does not make much sense since there are rarely large numbers of headers to add. Setting it too high prevents processing of large requests or responses. Setting it too low prevents addition of new headers to already large requests or to POST requests. It is generally wise to set it to about 1024. It is automatically readjusted to half of bufsize if it is larger than that. This means you don't have to worry about it when changing bufsize. ``` **tune.pattern.cache-size** <number> ``` Sets the size of the pattern lookup cache to <number> entries. This is an LRU cache which reminds previous lookups and their results. It is used by ACLs and maps on slow pattern lookups, namely the ones using the "[sub](#sub)", "reg", "dir", "dom", "end", "[bin](#bin)" match methods as well as the case-insensitive strings. It applies to pattern expressions which means that it will be able to memorize the result of a lookup among all the patterns specified on a configuration line (including all those loaded from files). It automatically invalidates entries which are updated using HTTP actions or on the CLI. The default cache size is set to 10000 entries, which limits its footprint to about 5 MB per process/thread on 32-bit systems and 8 MB per process/thread on 64-bit systems, as caches are thread/process local. There is a very low risk of collision in this cache, which is in the order of the size of the cache divided by 2^64. Typically, at 10000 requests per second with the default cache size of 10000 entries, there's 1% chance that a brute force attack could cause a single collision after 60 years, or 0.1% after 6 years. This is considered much lower than the risk of a memory corruption caused by aging components. If this is not acceptable, the cache can be disabled by setting this parameter to 0. ``` **tune.peers.max-updates-at-once** <number> ``` Sets the maximum number of stick-table updates that haproxy will try to process at once when sending messages. Retrieving the data for these updates requires some locking operations which can be CPU intensive on highly threaded machines if unbound, and may also increase the traffic latency during the initial batched transfer between an older and a newer process. Conversely low values may also incur higher CPU overhead, and take longer to complete. The default value is 200 and it is suggested not to change it. ``` **tune.pipesize** <number> ``` Sets the kernel pipe buffer size to this size (in bytes). By default, pipes are the default size for the system. But sometimes when using TCP splicing, it can improve performance to increase pipe sizes, especially if it is suspected that pipes are not filled and that many calls to splice() are performed. This has an impact on the kernel's memory footprint, so this must not be changed if impacts are not understood. ``` **tune.pool-high-fd-ratio** <number> ``` This setting sets the max number of file descriptors (in percentage) used by HAProxy globally against the maximum number of file descriptors HAProxy can use before we start killing idle connections when we can't reuse a connection and we have to create a new one. 
The default is 25 (one quarter of the file descriptors means that roughly half of the maximum front connections can keep an idle connection behind; anything beyond this probably doesn't make much sense in the general case when targeting connection reuse). ``` **tune.pool-low-fd-ratio** <number> ``` This setting sets the max number of file descriptors (in percentage) used by HAProxy globally against the maximum number of file descriptors HAProxy can use before we stop putting connections into the idle pool for reuse. The default is 20. ``` **tune.quic.frontend.conn-tx-buffers.limit** <number> ``` Warning: QUIC support in HAProxy is currently experimental. Configuration may change without deprecation in the future. This setting defines the maximum number of buffers allocated for a QUIC connection on data emission. By default, it is set to 30. QUIC buffers are drained on ACK reception. This setting has a direct impact on the throughput and memory consumption and can be adjusted according to the estimated round-trip time. Each buffer is tune.bufsize bytes. ``` **tune.quic.frontend.max-idle-timeout** <timeout> ``` Warning: QUIC support in HAProxy is currently experimental. Configuration may change without deprecation in the future. Sets the QUIC max_idle_timeout transport parameter in milliseconds for frontends, which determines the period of time after which a connection silently closes if it has remained inactive during an effective period of time deduced from the two max_idle_timeout values announced by the two endpoints: - the minimum of the two values if both are not null, - the maximum if only one of them is not null, - if both values are null, this feature is disabled. The default value is 30000. ``` **tune.quic.frontend.max-streams-bidi** <number> ``` Warning: QUIC support in HAProxy is currently experimental. Configuration may change without deprecation in the future. Sets the QUIC initial_max_streams_bidi transport parameter for frontends. This is the initial maximum number of bidirectional streams the remote peer will be authorized to open. This determines the number of concurrent client requests. The default value is 100. ``` **tune.quic.retry-threshold** <number> ``` Warning: QUIC support in HAProxy is currently experimental. Configuration may change without deprecation in the future. Dynamically enables the Retry feature for all the configured QUIC listeners as soon as this number of half-open connections is reached. A half-open connection is a connection whose handshake has not yet successfully completed or failed. To be functional, this setting needs a cluster secret to be set; if not, it will be silently ignored (see "[cluster-secret](#cluster-secret)" setting). This setting will also be silently ignored if the use of QUIC Retry was forced (see "[quic-force-retry](#quic-force-retry)"). The default value is 100. See https://www.rfc-editor.org/rfc/rfc9000.html#section-8.1.2 for more information about QUIC retry. ``` **tune.quic.socket-owner** { listener | connection } ``` Warning: QUIC support in HAProxy is currently experimental. Configuration may change without deprecation in the future. Specifies how QUIC connections will use sockets for receive/send operations. Connections can share the listener socket, or each connection can allocate its own socket. The default "listener" value indicates that QUIC transfers will occur on the shared listener socket. This option can be a good compromise for small traffic as it helps reduce FD consumption.
However, performance won't be optimal due to a higher CPU usage if listeners are shared across a lot of threads or if a large number of QUIC connections can be used simultaneously. If the "connection" value is set, a dedicated socket will be allocated for each QUIC connection. This option is the preferred one to achieve the best performance with heavy QUIC traffic. However, this relies on some advanced features from the UDP network stack. If your platform is deemed not compatible, haproxy will automatically revert to "listener" mode on startup. ``` **tune.rcvbuf.client** <number> **tune.rcvbuf.server** <number> ``` Forces the kernel socket receive buffer size on the client or the server side to the specified value in bytes. This value applies to all TCP/HTTP frontends and backends. It should normally never be set, and the default size (0) lets the kernel auto-tune this value depending on the amount of available memory. However it can sometimes help to set it to very low values (e.g. 4096) in order to save kernel memory by preventing it from buffering too large amounts of received data. Lower values will significantly increase CPU usage though. ``` **tune.recv\_enough** <number> ``` HAProxy uses some hints to detect that a short read indicates the end of the socket buffers. One of them is that a read returns more than <recv_enough> bytes, which defaults to 10136 (7 segments of 1448 each). This default value may be changed by this setting to better deal with workloads involving lots of short messages such as telnet or SSH sessions. ``` **tune.runqueue-depth** <number> ``` Sets the maximum number of tasks that can be processed at once when running tasks. The default value depends on the number of threads but sits between 35 and 280, which tend to show the highest request rates and lowest latencies. Increasing it may incur latency when dealing with I/Os, while making it too small can incur extra overhead. Higher thread counts benefit from lower values. When experimenting with much larger values, it may be useful to also enable tune.sched.low-latency and possibly tune.fd.edge-triggered to limit the maximum latency to the lowest possible. ``` **tune.sched.low-latency** { on | off } ``` Enables ('on') or disables ('off') the low-latency task scheduler. By default HAProxy processes tasks from several classes one class at a time as this is the most efficient. But when running with large values of tune.runqueue-depth this can have a measurable effect on request or connection latency. When this low-latency setting is enabled, tasks of lower priority classes will always be executed before other ones if they exist. This permits lowering the maximum latency experienced by new requests or connections in the middle of massive traffic, at the expense of a higher impact on this large traffic. For regular usage it is better to leave this off. The default value is off. ``` **tune.sndbuf.client** <number> **tune.sndbuf.server** <number> ``` Forces the kernel socket send buffer size on the client or the server side to the specified value in bytes. This value applies to all TCP/HTTP frontends and backends. It should normally never be set, and the default size (0) lets the kernel auto-tune this value depending on the amount of available memory. However it can sometimes help to set it to very low values (e.g. 4096) in order to save kernel memory by preventing it from buffering too large amounts of received data. Lower values will significantly increase CPU usage though.
Another use case is to prevent write timeouts with extremely slow clients due to the kernel waiting for a large part of the buffer to be read before notifying HAProxy again. ``` **tune.ssl.cachesize** <number> ``` Sets the size of the global SSL session cache, in a number of blocks. A block is large enough to contain an encoded session without peer certificate. An encoded session with peer certificate is stored in multiple blocks depending on the size of the peer certificate. A block uses approximately 200 bytes of memory (based on `sizeof(struct sh_ssl_sess_hdr) + SHSESS_BLOCK_MIN_SIZE` calculation used for `shctx_init` function). The default value may be forced at build time, otherwise defaults to 20000. When the cache is full, the most idle entries are purged and reassigned. Higher values reduce the occurrence of such a purge, hence the number of CPU-intensive SSL handshakes by ensuring that all users keep their session as long as possible. All entries are pre-allocated upon startup. Setting this value to 0 disables the SSL session cache. ``` **tune.ssl.capture-buffer-size** <number> **tune.ssl.capture-cipherlist-size** <number> (deprecated) ``` Sets the maximum size of the buffer used for capturing client hello cipher list, extensions list, elliptic curves list and elliptic curve point formats. If the value is 0 (default value) the capture is disabled, otherwise a buffer is allocated for each SSL/TLS connection. ``` **tune.ssl.default-dh-param** <number> ``` Sets the maximum size of the Diffie-Hellman parameters used for generating the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange. The final size will try to match the size of the server's RSA (or DSA) key (e.g, a 2048 bits temporary DH key for a 2048 bits RSA key), but will not exceed this maximum value. Only 1024 or higher values are allowed. Higher values will increase the CPU load, and values greater than 1024 bits are not supported by Java 7 and earlier clients. This value is not used if static Diffie-Hellman parameters are supplied either directly in the certificate file or by using the ssl-dh-param-file parameter. If there is neither a default-dh-param nor a ssl-dh-param-file defined, and if the server's PEM file of a given frontend does not specify its own DH parameters, then DHE ciphers will be unavailable for this frontend. ``` **tune.ssl.force-private-cache** ``` This option disables SSL session cache sharing between all processes. It should normally not be used since it will force many renegotiations due to clients hitting a random process. But it may be required on some operating systems where none of the SSL cache synchronization method may be used. In this case, adding a first layer of hash-based load balancing before the SSL layer might limit the impact of the lack of session sharing. ``` **tune.ssl.hard-maxrecord** <number> ``` Sets the maximum amount of bytes passed to SSL_write() at any time. Default value 0 means there is no limit. In contrast to tune.ssl.maxrecord this settings will not be adjusted dynamically. Smaller records may decrease throughput, but may be required when dealing with low-footprint clients. ``` **tune.ssl.keylog** { on | off } ``` This option activates the logging of the TLS keys. It should be used with care as it will consume more memory per SSL session and could decrease performances. This is disabled by default. These sample fetches should be used to generate the SSLKEYLOGFILE that is required to decipher traffic with wireshark. 
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format The SSLKEYLOG is a series of lines which are formatted this way: <Label> <space> <ClientRandom> <space> <Secret> The ClientRandom is provided by the %[ssl_fc_client_random,hex] sample fetch; the Secret and the Label can be found in the table below. You need to generate an SSLKEYLOGFILE with all the labels in this table. The following sample fetches are hexadecimal strings and do not need to be converted.
SSLKEYLOGFILE Label             | Sample fetches for the Secrets
--------------------------------|-----------------------------------------
CLIENT_EARLY_TRAFFIC_SECRET     | %[ssl_fc_client_early_traffic_secret]
CLIENT_HANDSHAKE_TRAFFIC_SECRET | %[ssl_fc_client_handshake_traffic_secret]
SERVER_HANDSHAKE_TRAFFIC_SECRET | %[ssl_fc_server_handshake_traffic_secret]
CLIENT_TRAFFIC_SECRET_0         | %[ssl_fc_client_traffic_secret_0]
SERVER_TRAFFIC_SECRET_0         | %[ssl_fc_server_traffic_secret_0]
EXPORTER_SECRET                 | %[ssl_fc_exporter_secret]
EARLY_EXPORTER_SECRET           | %[ssl_fc_early_exporter_secret]
This is only available with OpenSSL 1.1.1, and useful with TLS 1.3 sessions. If you want to generate the content of an SSLKEYLOGFILE with TLS < 1.3, you only need this line: "CLIENT_RANDOM %[ssl_fc_client_random,hex] %[ssl_fc_session_key,hex]" ``` **tune.ssl.lifetime** <timeout> ``` Sets how long a cached SSL session may remain valid. This time is expressed in seconds and defaults to 300 (5 min). It is important to understand that it does not guarantee that sessions will last that long, because if the cache is full, the longest idle sessions will be purged despite their configured lifetime. The real usefulness of this setting is to prevent sessions from being used for too long. ``` **tune.ssl.maxrecord** <number> ``` Sets the maximum amount of bytes passed to SSL_write() at the beginning of the data transfer. Default value 0 means there is no limit. Over SSL/TLS, the client can decipher the data only once it has received a full record. With large records, it means that clients might have to download up to 16kB of data before starting to process them. Limiting the value can improve page load times on browsers located over high latency or low bandwidth networks. It is suggested to find optimal values which fit into 1 or 2 TCP segments (generally 1448 bytes over Ethernet with TCP timestamps enabled, or 1460 when timestamps are disabled), keeping in mind that SSL/TLS adds some overhead. Typical values of 1419 and 2859 gave good results during tests. Use "strace -e trace=write" to find the best value. HAProxy will automatically switch to this setting after an idle stream has been detected (see tune.idletimer above). See also tune.ssl.hard-maxrecord. ``` **tune.ssl.ssl-ctx-cache-size** <number> ``` Sets the size of the cache used to store generated certificates to <number> entries. This is an LRU cache. Because dynamically generating an SSL certificate is expensive, generated certificates are cached. The default cache size is set to 1000 entries. ``` **tune.vars.global-max-size** <size> **tune.vars.proc-max-size** <size> **tune.vars.reqres-max-size** <size> **tune.vars.sess-max-size** <size> **tune.vars.txn-max-size** <size> ``` These five tunes help to manage the maximum amount of memory used by the variables system. "global" limits the overall amount of memory available for all scopes.
"[proc](#proc)" limits the memory for the process scope, "sess" limits the memory for the session scope, "txn" for the transaction scope, and "reqres" limits the memory for each request or response processing. Memory accounting is hierarchical, meaning more coarse grained limits include the finer grained ones: "[proc](#proc)" includes "sess", "sess" includes "txn", and "txn" includes "reqres". For example, when "[tune.vars.sess-max-size](#tune.vars.sess-max-size)" is limited to 100, "[tune.vars.txn-max-size](#tune.vars.txn-max-size)" and "[tune.vars.reqres-max-size](#tune.vars.reqres-max-size)" cannot exceed 100 either. If we create a variable "txn.var" that contains 100 bytes, all available space is consumed. Notice that exceeding the limits at runtime will not result in an error message, but values might be cut off or corrupted. So make sure to accurately plan for the amount of space needed to store all your variables. ``` **tune.zlib.memlevel** <number> ``` Sets the memLevel parameter in zlib initialization for each session. It defines how much memory should be allocated for the internal compression state. A value of 1 uses minimum memory but is slow and reduces compression ratio, a value of 9 uses maximum memory for optimal speed. Can be a value between 1 and 9. The default value is 8. ``` **tune.zlib.windowsize** <number> ``` Sets the window size (the size of the history buffer) as a parameter of the zlib initialization for each session. Larger values of this parameter result in better compression at the expense of memory usage. Can be a value between 8 and 15. The default value is 15. ``` ### 3.3. Debugging **anonkey** <key> ``` This sets the global anonymizing key to <key>, which must be a 32-bit number between 0 and 4294967295. This is the key that will be used by default by CLI commands when anonymized mode is enabled. This key may also be set at runtime from the CLI command "set global-key". See also command line argument "-dC" in the management manual. ``` **quick-exit** ``` This speeds up the old process exit upon reload by skipping the releasing of memory objects and listeners, since all of these are reclaimed by the operating system at the process' death. The gains are only marginal (in the order of a few hundred milliseconds for huge configurations at most). The main target usage in fact is when a bug is spotted in the deinit() code, as this allows to bypass it. It is better not to use this unless instructed to do so by developers. ``` **quiet** ``` Do not display any message during startup. It is equivalent to the command- line argument "-q". ``` **zero-warning** ``` When this option is set, HAProxy will refuse to start if any warning was emitted while processing the configuration. It is highly recommended to set this option on configurations that are not changed often, as it helps detect subtle mistakes and keep the configuration clean and forward-compatible. Note that "haproxy -c" will also report errors in such a case. This option is equivalent to command line argument "-dW". ``` ### 3.4. Userlists ``` It is possible to control access to frontend/backend/listen sections or to http stats by allowing only authenticated and authorized users. To do this, it is required to create at least one userlist and to define users. ``` **userlist** <listname> ``` Creates new userlist with name <listname>. Many independent userlists can be used to store authentication & authorization data for independent customers. 
``` **group** <groupname> [users <user>,<user>,(...)] ``` Adds group <groupname> to the current userlist. It is also possible to attach users to this group by using a comma separated list of names proceeded by "users" keyword. ``` **user** <username> [password|insecure-password <password>] [groups <group>,<group>,(...)] ``` Adds user <username> to the current userlist. Both secure (encrypted) and insecure (unencrypted) passwords can be used. Encrypted passwords are evaluated using the crypt(3) function, so depending on the system's capabilities, different algorithms are supported. For example, modern Glibc based Linux systems support MD5, SHA-256, SHA-512, and, of course, the classic DES-based method of encrypting passwords. Attention: Be aware that using encrypted passwords might cause significantly increased CPU usage, depending on the number of requests, and the algorithm used. For any of the hashed variants, the password for each request must be processed through the chosen algorithm, before it can be compared to the value specified in the config file. Most current algorithms are deliberately designed to be expensive to compute to achieve resistance against brute force attacks. They do not simply salt/hash the clear text password once, but thousands of times. This can quickly become a major factor in HAProxy's overall CPU consumption! ``` Example: ``` userlist L1 group G1 users tiger,scott group G2 users xdb,scott user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91 user scott insecure-password elgato user xdb insecure-password hello userlist L2 group G1 group G2 user tiger password $6$k6y3o.eP$JlKBx(...)xHSwRv6J.C0/D7cV91 groups G1 user scott insecure-password elgato groups G1,G2 user xdb insecure-password hello groups G2 ``` ``` Please note that both lists are functionally identical. ``` ### 3.5. Peers ``` It is possible to propagate entries of any data-types in stick-tables between several HAProxy instances over TCP connections in a multi-master fashion. Each instance pushes its local updates and insertions to remote peers. The pushed values overwrite remote ones without aggregation. As an exception, the data type "conn_cur" is never learned from peers, as it is supposed to reflect local values. Earlier versions used to synchronize it and to cause negative values in active-active setups, and always-growing values upon reloads or active-passive switches because the local value would reflect more connections than locally present. This information, however, is pushed so that monitoring systems can watch it. Interrupted exchanges are automatically detected and recovered from the last known point. In addition, during a soft restart, the old process connects to the new one using such a TCP connection to push all its entries before the new process tries to connect to other peers. That ensures very fast replication during a reload, it typically takes a fraction of a second even for large tables. Note that Server IDs are used to identify servers remotely, so it is important that configurations look similar or at least that the same IDs are forced on each server on all participants. ``` **peers** <peersect> ``` Creates a new peer list with name <peersect>. It is an independent section, which is referenced by one or more stick-tables. ``` **bind** [<address>]:<port\_range> [, ...] [param\*] ``` Defines the binding parameters of the local peer of this "[peers](#peers)" section. Such lines are not supported with "[peer](#peer)" line in the same "[peers](#peers)" section. 
``` **disabled** ``` Disables a peers section. It disables both listening and any synchronization related to this section. This is provided to disable synchronization of stick tables without having to comment out all "[peers](#peers)" references. ``` **default-bind** [param\*] ``` Defines the binding parameters for the local peer, excepted its address. ``` **default-server** [param\*] ``` Change default options for a server in a "[peers](#peers)" section. ``` Arguments: ``` <param*> is a list of parameters for this server. The "default-server" keyword accepts an important number of options and has a complete section dedicated to it. In a peers section, the transport parameters of a "default-server" line are supported. Please refer to [section 5](#5) for more details, and the "server" keyword below in this section for some of the restrictions. ``` **See also:** "server" and [section 5](#5) about server options **enabled** ``` This re-enables a peers section which was previously disabled via the "disabled" keyword. ``` **log** <address> [len <length>] [format <format>] [sample <ranges>:<sample\_size>] <facility> [<level> [<minlevel>]] ``` "[peers](#peers)" sections support the same "log" keyword as for the proxies to log information about the "[peers](#peers)" listener. See "log" option for proxies for more details. ``` **peer** <peername> <ip>:<port> [param\*] ``` Defines a peer inside a peers section. If <peername> is set to the local peer name (by default hostname, or forced using "-L" command line option or "[localpeer](#localpeer)" global configuration setting), HAProxy will listen for incoming remote peer connection on <ip>:<port>. Otherwise, <ip>:<port> defines where to connect to in order to join the remote peer, and <peername> is used at the protocol level to identify and validate the remote peer on the server side. During a soft restart, local peer <ip>:<port> is used by the old instance to connect the new one and initiate a complete replication (teaching process). It is strongly recommended to have the exact same peers declaration on all peers and to only rely on the "-L" command line argument or the "[localpeer](#localpeer)" global configuration setting to change the local peer name. This makes it easier to maintain coherent configuration files across all peers. You may want to reference some environment variables in the address parameter, see [section 2.3](#2.3) about environment variables. Note: "[peer](#peer)" keyword may transparently be replaced by "server" keyword (see "server" keyword explanation below). ``` **server** <peername> [<ip>:<port>] [param\*] ``` As previously mentioned, "[peer](#peer)" keyword may be replaced by "server" keyword with a support for all "server" parameters found in 5.2 paragraph that are related to transport settings. If the underlying peer is local, <ip>:<port> parameters must not be present; these parameters must be provided on a "bind" line (see "bind" keyword of this "[peers](#peers)" section). A number of "server" parameters are irrelevant for "[peers](#peers)" sections. Peers by nature do not support dynamic host name resolution nor health checks, hence parameters like "init_addr", "resolvers", "[check](#check)", "[agent-check](#agent-check)", or "[track](#track)" are not supported. Similarly, there is no load balancing nor stickiness, thus parameters such as "[weight](#weight)" or "cookie" have no effect. ``` Example: ``` # The old way. 
peers mypeers peer haproxy1 192.168.0.1:1024 peer haproxy2 192.168.0.2:1024 peer haproxy3 10.2.0.1:1024 backend mybackend mode tcp balance roundrobin stick-table type ip size 20k peers mypeers stick on src server srv1 192.168.0.30:80 server srv2 192.168.0.31:80 Example: peers mypeers bind 192.168.0.1:1024 ssl crt mycerts/pem default-server ssl verify none server haproxy1 #local peer server haproxy2 192.168.0.2:1024 server haproxy3 10.2.0.1:1024 ``` **shards** <shards> ``` In some configurations, one would like to distribute the stick-table contents to some peers in place of sending all the stick-table contents to each peer declared in the "[peers](#peers)" section. In such cases, "shards" specifies the number of peer involved in this stick-table contents distribution. See also "[shard](#shard)" server parameter. ``` **table** <tablename> type {ip | integer | string [len <length>] | binary [len <length>]} size <size> [expire <expire>] [nopurge] [store <data\_type>]\* ``` Configure a stickiness table for the current section. This line is parsed exactly the same way as the "[stick-table](#stick-table)" keyword in others section, except for the "[peers](#peers)" argument which is not required here and with an additional mandatory first parameter to designate the stick-table. Contrary to others sections, there may be several "[table](#table)" lines in "[peers](#peers)" sections (see also "[stick-table](#stick-table)" keyword). Also be aware of the fact that "[peers](#peers)" sections have their own stick-table namespaces to avoid collisions between stick-table names identical in different "[peers](#peers)" section. This is internally handled prepending the "[peers](#peers)" sections names to the name of the stick-tables followed by a '/' character. If somewhere else in the configuration file you have to refer to such stick-tables declared in "[peers](#peers)" sections you must use the prefixed version of the stick-table name as follows: peers mypeers peer A ... peer B ... table t1 ... frontend fe1 tcp-request content track-sc0 src table mypeers/t1 This is also this prefixed version of the stick-table names which must be used to refer to stick-tables through the CLI. About "[peers](#peers)" protocol, as only "[peers](#peers)" belonging to the same section may communicate with each others, there is no need to do such a distinction. Several "[peers](#peers)" sections may declare stick-tables with the same name. This is shorter version of the stick-table name which is sent over the network. There is only a '/' character as prefix to avoid stick-table name collisions between stick-tables declared as backends and stick-table declared in "[peers](#peers)" sections as follows in this weird but supported configuration: peers mypeers peer A ... peer B ... table t1 type string size 10m store gpc0 backend t1 stick-table type string size 10m store gpc0 peers mypeers Here "t1" table declared in "mypeers" section has "mypeers/t1" as global name. "t1" table declared as a backend as "t1" as global name. But at peer protocol level the former table is named "/t1", the latter is again named "t1". ``` ### 3.6. Mailers ``` It is possible to send email alerts when the state of servers changes. If configured email alerts are sent to each mailer that is configured in a mailers section. Email is sent to mailers using SMTP. ``` **mailers** <mailersect> ``` Creates a new mailer list with the name <mailersect>. It is an independent section which is referenced by one or more proxies. 
``` **mailer** <mailername> <ip>:<port> ``` Defines a mailer inside a mailers section. ``` Example: ``` mailers mymailers
    mailer smtp1 192.168.0.1:587
    mailer smtp2 192.168.0.2:587

backend mybackend
    mode tcp
    balance roundrobin
    email-alert mailers mymailers
    email-alert from [email protected]
    email-alert to [email protected]
    server srv1 192.168.0.30:80
    server srv2 192.168.0.31:80 ``` **timeout mail** <time> ``` Defines the time available for a mail/connection to be made and sent to the mail server. If not defined, the default value is 10 seconds. To allow for at least two SYN-ACK packets to be sent during the initial TCP handshake, it is advised to keep this value above 4 seconds. ``` Example: ``` mailers mymailers
    timeout mail 20s
    mailer smtp1 192.168.0.1:587 ``` ### 3.7. Programs ``` In master-worker mode, it is possible to launch external binaries with the master; these processes are called programs. These programs are launched and managed the same way as the workers. During a reload of HAProxy, those processes go through the same sequence as a worker:
  - the master is re-executed
  - the master sends a SIGUSR1 signal to the program
  - if "[option start-on-reload](#option%20start-on-reload)" is not disabled, the master launches a new instance of the program
During a stop or restart, a SIGTERM is sent to the programs. ``` **program** <name> ``` Creates a new program section. This section will create an instance <name> which is visible in "show proc" on the master CLI. (See "9.4. Master CLI" in the management guide). ``` **command** <command> [arguments\*] ``` Defines the command to start, with optional arguments. The command is looked up in the current PATH if it does not include an absolute path. This is a mandatory option of the program section. Arguments containing spaces must be enclosed in quotes or double quotes or be prefixed by a backslash. ``` **user** <user name> ``` Changes the executed command user ID to the <user name> from /etc/passwd. See also "group". ``` **group** <group name> ``` Changes the executed command group ID to the <group name> from /etc/group. See also "user". ``` **option start-on-reload** **no option start-on-reload** ``` Start (or not) a new instance of the program upon a reload of the master. The default is to start a new instance. This option may only be used in a program section. ``` ### 3.8. HTTP-errors ``` It is possible to globally declare several groups of HTTP errors, to be imported afterwards in any proxy section. The same group may be referenced in several places and can be fully or partially imported. ``` **http-errors** <name> ``` Creates a new http-errors group with the name <name>. It is an independent section that may be referenced by one or more proxies using its name. ``` **errorfile** <code> <file> ``` Associates the contents of a file with an HTTP error code. ``` Arguments : ``` <code> is the HTTP status code. Currently, HAProxy is capable of generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, 425, 429, 500, 501, 502, 503, and 504. <file> designates a file containing the full HTTP response. It is recommended to follow the common practice of appending ".http" to the filename so that people do not confuse the response with HTML error pages, and to use absolute paths, since files are read before any chroot is performed. ``` ``` Please refer to the "errorfile" keyword in [section 4](#4) for details.
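```
To make the import step concrete, here is a hypothetical sketch of a group being declared and then pulled into a frontend; the names and paths are invented, and the "errorfiles" proxy keyword used for the import is described in section 4:
```
http-errors site-errors
    errorfile 404 /etc/haproxy/errorfiles/404.http
    errorfile 503 /etc/haproxy/errorfiles/503.http

frontend www
    bind :80
    errorfiles site-errors        # import the whole group into this proxy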
``` Example: ``` http-errors website-1 errorfile 400 /etc/haproxy/errorfiles/site1/400.http errorfile 404 /etc/haproxy/errorfiles/site1/404.http errorfile 408 /dev/null # work around Chrome pre-connect bug http-errors website-2 errorfile 400 /etc/haproxy/errorfiles/site2/400.http errorfile 404 /etc/haproxy/errorfiles/site2/404.http errorfile 408 /dev/null # work around Chrome pre-connect bug ``` ### 3.9. Rings ``` It is possible to globally declare ring-buffers, to be used as target for log servers or traces. ``` **ring** <ringname> ``` Creates a new ring-buffer with name <ringname>. ``` **backing-file** <path> ``` This replaces the regular memory allocation by a RAM-mapped file to store the ring. This can be useful for collecting traces or logs for post-mortem analysis, without having to attach a slow client to the CLI. Newer contents will automatically replace older ones so that the latest contents are always available. The contents written to the ring will be visible in that file once the process stops (most often they will even be seen very soon after but there is no such guarantee since writes are not synchronous). When this option is used, the total storage area is reduced by the size of the "struct ring" that starts at the beginning of the area, and that is required to recover the area's contents. The file will be created with the starting user's ownership, with mode 0600 and will be of the size configured by the "[size](#size)" directive. When the directive is parsed (thus even during config checks), any existing non-empty file will first be renamed with the extra suffix ".bak", and any previously existing file with suffix ".bak" will be removed. This ensures that instant reload or restart of the process will not wipe precious debugging information, and will leave time for an admin to spot this new ".bak" file and to archive it if needed. As such, after a crash the file designated by <path> will contain the freshest information, and if the service is restarted, the "<path>.bak" file will have it instead. This means that the total storage capacity required will be double of the ring size. Failures to rotate the file are silently ignored, so placing the file into a directory without write permissions will be sufficient to avoid the backup file if not desired. WARNING: there are stability and security implications in using this feature. First, backing the ring to a slow device (e.g. physical hard drive) may cause perceptible slowdowns during accesses, and possibly even panics if too many threads compete for accesses. Second, an external process modifying the area could cause the haproxy process to crash or to overwrite some of its own memory with traces. Third, if the file system fills up before the ring, writes to the ring may cause the process to crash. The information present in this ring are structured and are NOT directly readable using a text editor (even though most of it looks barely readable). The output of this file is only intended for developers. ``` **description** <text> ``` The description is an optional description string of the ring. It will appear on CLI. By default, <name> is reused to fill this field. ``` **format** <format> ``` Format used to store events into the ring buffer. ``` Arguments: ``` <format> is the log format used when generating syslog messages. It may be one of the following : iso A message containing only the ISO date, followed by the text. The PID, process name and system name are omitted. This is designed to be used with a local log server. 
local Analog to rfc3164 syslog message format except that hostname field is stripped. This is the default. Note: option "[log-send-hostname](#log-send-hostname)" switches the default to rfc3164. raw A message containing only the text. The level, PID, date, time, process name and system name are omitted. This is designed to be used in containers or during development, where the severity only depends on the file descriptor used (stdout/stderr). This is the default. rfc3164 The RFC3164 syslog message format. (https://tools.ietf.org/html/rfc3164) rfc5424 The RFC5424 syslog message format. (https://tools.ietf.org/html/rfc5424) short A message containing only a level between angle brackets such as '<3>', followed by the text. The PID, date, time, process name and system name are omitted. This is designed to be used with a local log server. This format is compatible with what the systemd logger consumes. priority A message containing only a level plus syslog facility between angle brackets such as '<63>', followed by the text. The PID, date, time, process name and system name are omitted. This is designed to be used with a local log server. timed A message containing only a level between angle brackets such as '<3>', followed by ISO date and by the text. The PID, process name and system name are omitted. This is designed to be used with a local log server. ``` **maxlen** <length> ``` The maximum length of an event message stored into the ring, including formatted header. If an event message is longer than <length>, it will be truncated to this length. ``` **server** <name> <address> [param\*] ``` Used to configure a syslog tcp server to forward messages from ring buffer. This supports for all "server" parameters found in 5.2 paragraph. Some of these parameters are irrelevant for "[ring](#ring)" sections. Important point: there is little reason to add more than one server to a ring, because all servers will receive the exact same copy of the ring contents, and as such the ring will progress at the speed of the slowest server. If one server does not respond, it will prevent old messages from being purged and may block new messages from being inserted into the ring. The proper way to send messages to multiple servers is to use one distinct ring per log server, not to attach multiple servers to the same ring. Note that specific server directive "[log-proto](#log-proto)" is used to set the protocol used to send messages. ``` **size** <size> ``` This is the optional size in bytes for the ring-buffer. Default value is set to BUFSIZE. ``` **timeout connect** <timeout> ``` Set the maximum time to wait for a connection attempt to a server to succeed. ``` Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` **timeout server** <timeout> ``` Set the maximum time for pending data staying into output buffer. ``` Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` Example: ``` global log ring@myring local7 ring myring description "My local buffer" format rfc3164 maxlen 1200 size 32764 timeout connect 5s timeout server 10s server mysyslogsrv 127.0.0.1:6514 log-proto octet-count ``` ### 3.10. 
Log forwarding ``` It is possible to declare one or multiple log forwarding sections. HAProxy will forward all received log messages to a list of log servers. ``` **log-forward** <name> ``` Creates a new log forwarder proxy identified as <name>. ``` **backlog** <conns> ``` Gives hints to the system about the approximate listen backlog size desired for incoming connections. ``` **bind** <addr> [param\*] ``` Used to configure a stream log listener to receive messages to forward. This supports the "bind" parameters found in paragraph 5.1, including those about ssl, but some statements such as "alpn" may be irrelevant for the syslog protocol over TCP. Those listeners support both "Octet Counting" and "Non-Transparent-Framing" modes as defined in rfc-6587. ``` **dgram-bind** <addr> [param\*] ``` Used to configure a datagram log listener to receive messages to forward. Addresses must be in IPv4 or IPv6 form, followed by a port. This supports some of the "bind" parameters found in paragraph 5.1, among which "[interface](#interface)", "namespace" or "[transparent](#option%20transparent)", the other ones being silently ignored as irrelevant for the UDP/syslog case. ``` **log global** **log** <address> [len <length>] [format <format>] [sample <ranges>:<sample\_size>] <facility> [<level> [<minlevel>]] ``` Used to configure target log servers. See the proxies documentation for more details. If no format is specified, HAProxy tries to keep the incoming log format. The configured facility is ignored, except if the incoming message does not present a facility but one is mandatory in the outgoing format. If there is no timestamp available in the input format, but the field exists in the output format, HAProxy will use the local date. ``` Example: ``` global log stderr format iso local7 ring myring description "My local buffer" format rfc5424 maxlen 1200 size 32764 timeout connect 5s timeout server 10s # syslog tcp server server mysyslogsrv 127.0.0.1:514 log-proto octet-count log-forward syslog-loadb dgram-bind 127.0.0.1:1514 bind 127.0.0.1:1514 # all messages on stderr log global # all messages on local tcp syslog server log ring@myring local0 # load balance messages on 4 udp syslog servers log 127.0.0.1:10001 sample 1:4 local0 log 127.0.0.1:10002 sample 2:4 local0 log 127.0.0.1:10003 sample 3:4 local0 log 127.0.0.1:10004 sample 4:4 local0 ``` **maxconn** <conns> ``` Fixes the maximum number of concurrent connections on a log forwarder. 10 is the default. ``` **timeout client** <timeout> ``` Sets the maximum inactivity time on the client side. ``` 4. Proxies ----------- ``` Proxy configuration can be located in a set of sections : - defaults [<name>] [ from <defaults_name> ] - frontend <name> [ from <defaults_name> ] - backend <name> [ from <defaults_name> ] - listen <name> [ from <defaults_name> ] A "frontend" section describes a set of listening sockets accepting client connections. A "backend" section describes a set of servers to which the proxy will connect to forward incoming connections. A "listen" section defines a complete proxy with its frontend and backend parts combined in one section. It is generally useful for TCP-only traffic. A "defaults" section resets all settings to the documented ones and presets new ones for use by subsequent sections. All of the "frontend", "backend" and "listen" sections always take their initial settings from a defaults section, by default the latest one that appears before the newly created section.
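For illustration only (the section and server names below are hypothetical and not taken from this document's examples), an implicit inheritance looks like this: the frontend and backend pick up their settings from the last "defaults" section declared before them.

    defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s

    frontend fe_web                 # implicitly inherits from the defaults above
        bind :80
        default_backend be_app

    backend be_app                  # same implicit inheritance
        server app1 192.0.2.10:80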
It is possible to explicitly designate a specific "defaults" section to load the initial settings from, by indicating its name on the section line after the optional keyword "from". While "defaults" sections do not require a name, this use is encouraged for better readability. It is also the only way to designate a specific section to use instead of the default previous one. Since "defaults" section names are optional, by default a very permissive check is applied on their name and these are even permitted to overlap. However, if a "defaults" section is referenced by any other section, its name must comply with the syntax imposed on all proxy names, and this name must be unique among the defaults sections. Please note that regardless of what is currently permitted, it is recommended to avoid duplicate section names in general and to respect the same syntax as for proxy names. This rule might be enforced in a future version. In addition, a warning is emitted if a defaults section is explicitly used by a proxy while it is also implicitly used by another one because it is the last one defined. It is highly encouraged not to mix both usages, either by always using explicit references or by adding a last common defaults section reserved for all implicit uses. Note that it is even possible for a defaults section to take its initial settings from another one, and as such, inherit settings across multiple levels of defaults sections. This can be convenient to establish certain configuration profiles to carry groups of default settings (e.g. TCP vs HTTP or short vs long timeouts) but can quickly become confusing to follow. All proxy names must be formed from upper and lower case letters, digits, '-' (dash), '_' (underscore), '.' (dot) and ':' (colon). Proxy names are case-sensitive, which means that "www" and "WWW" are two different proxies. Historically, all proxy names could overlap; it just caused trouble in the logs. Since the introduction of content switching, it is mandatory that two proxies with overlapping capabilities (frontend/backend) have different names. However, it is still permitted that a frontend and a backend share the same name, as this configuration seems to be commonly encountered. Right now, two major proxy modes are supported : "tcp", also known as layer 4, and "http", also known as layer 7. In layer 4 mode, HAProxy simply forwards bidirectional traffic between two sides. In layer 7 mode, HAProxy analyzes the protocol, and can interact with it by allowing, blocking, switching, adding, modifying, or removing arbitrary contents in requests or responses, based on arbitrary criteria. In HTTP mode, the processing applied to requests and responses flowing over a connection depends on the combination of the frontend's HTTP options and the backend's. HAProxy supports 3 connection modes (a short configuration sketch follows this list): - KAL : keep alive ("[option http-keep-alive](#option%20http-keep-alive)") which is the default mode : all requests and responses are processed, and connections remain open but idle between responses and new requests. - SCL: server close ("[option http-server-close](#option%20http-server-close)") : the server-facing connection is closed after the end of the response is received, but the client-facing connection remains open. - CLO: close ("[option httpclose](#option%20httpclose)"): the connection is closed after the end of the response and "Connection: close" is appended in both directions.
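As a minimal, hypothetical sketch (not one of this document's own examples), the connection mode is selected with one of the options above, typically in a "defaults" or proxy section:

    defaults
        mode http
        option http-keep-alive       # KAL, the default mode
        # option http-server-close   # SCL
        # option httpclose           # CLO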
The effective mode that will be applied to a connection passing through a frontend and a backend can be determined by both proxy modes according to the following matrix, but in short, the modes are symmetric, keep-alive is the weakest option and close is the strongest.

                              Backend mode
                            | KAL | SCL | CLO
                        ----+-----+-----+----
                        KAL | KAL | SCL | CLO
                        ----+-----+-----+----
        Frontend mode   SCL | SCL | SCL | CLO
                        ----+-----+-----+----
                        CLO | CLO | CLO | CLO

It is possible to chain a TCP frontend to an HTTP backend. It is pointless if only HTTP traffic is handled, but it may be used to handle several protocols within the same frontend. In this case, the client's connection is first handled as a raw TCP connection before being upgraded to HTTP. Before the upgrade, the content processing is performed on raw data. Once upgraded, data is parsed and stored using an internal representation called HTX and it is no longer possible to rely on the raw representation. There is no way to go back. There are two kinds of upgrades, in-place upgrades and destructive upgrades. The first one involves a TCP to HTTP/1 upgrade. In HTTP/1, request processing is serialized, thus the applicative stream can be preserved. The second one involves a TCP to HTTP/2 upgrade. Because it is a multiplexed protocol, the applicative stream cannot be associated with any HTTP/2 stream and is destroyed. New applicative streams are then created when HAProxy receives new HTTP/2 streams at the lower level, in the H2 multiplexer. It is important to understand this difference because it drastically changes the way data is processed. When an HTTP/1 upgrade is performed, the content processing already performed on raw data is neither lost nor re-executed, while for an HTTP/2 upgrade, applicative streams are distinct and all frontend rules are evaluated systematically on each one. And as said, the first stream, the TCP one, is destroyed, but only after the frontend rules were evaluated. There is another important point to understand when HTTP processing is performed from a TCP proxy. While HAProxy is able to parse HTTP/1 on the fly from tcp-request content rules, it is not possible for HTTP/2. Only the HTTP/2 preface can be parsed. This is a huge limitation regarding HTTP content analysis in TCP. Concretely, it is only possible to know whether the received data is HTTP. For instance, it is not possible to choose a backend based on the Host header value, while it is trivial in HTTP/1. Fortunately, there is a way to mitigate this drawback. There are two ways to perform an HTTP upgrade. The first one, the historical method, is to select an HTTP backend. The upgrade happens when the backend is set. Thus, for in-place upgrades, only the backend configuration is considered in the HTTP data processing. For destructive upgrades, the applicative stream is destroyed, thus its processing is stopped. With this method, the possibilities to choose a backend with an HTTP/2 connection are really limited, as mentioned above, and a bit useless because the stream is destroyed. The second method is to upgrade during the tcp-request content rules evaluation, thanks to the "switch-mode http" action. In this case, the upgrade is performed in the frontend context and it is possible to define HTTP directives in this frontend. For in-place upgrades, it offers all the power of the HTTP analysis as soon as possible. It is not that far from an HTTP frontend. For destructive upgrades, it does not change anything except that it is useless to choose a backend on limited information.
It is of course the recommended method. Thus, testing the request protocol from the tcp-request content rules to perform an HTTP upgrade is enough. All the remaining HTTP manipulation may be moved to the frontend http-request ruleset. But keep in mind that tcp-request content rules remains evaluated on each streams, that can't be changed. ``` ### 4.1. Proxy keywords matrix ``` The following list of keywords is supported. Most of them may only be used in a limited set of section types. Some of them are marked as "deprecated" because they are inherited from an old syntax which may be confusing or functionally limited, and there are new recommended keywords to replace them. Keywords marked with "(*)" can be optionally inverted using the "no" prefix, e.g. "no option contstats". This makes sense when the option has been enabled by default and must be disabled for a specific instance. Such options may also be prefixed with "default" in order to restore default settings regardless of what has been specified in a previous "defaults" section. Keywords supported in defaults sections marked with "(!)" are only supported in named defaults sections, not anonymous ones. ``` | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [acl](#4-acl) | X (!) | | | | | [backlog](#4-backlog) | | | | | | [balance](#4-balance) | | | | | | [bind](#4-bind) | | | | | | [capture cookie](#4-capture%20cookie) | | | | | | [capture request header](#4-capture%20request%20header) | | | | | | [capture response header](#4-capture%20response%20header) | | | | | | [clitcpka-cnt](#4-clitcpka-cnt) | | | | | | [clitcpka-idle](#4-clitcpka-idle) | | | | | | [clitcpka-intvl](#4-clitcpka-intvl) | | | | | | [compression](#4-compression) | | | | | | [cookie](#4-cookie) | | | | | | [declare capture](#4-declare%20capture) | | | | | | [default-server](#4-default-server) | | | | | | [default\_backend](#4-default_backend) | | | | | | [description](#4-description) | | | | | | [disabled](#4-disabled) | | | | | | [dispatch](#4-dispatch) | | | | | | [email-alert from](#4-email-alert%20from) | | | | | | [email-alert level](#4-email-alert%20level) | | | | | | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [email-alert mailers](#4-email-alert%20mailers) | | | | | | [email-alert myhostname](#4-email-alert%20myhostname) | | | | | | [email-alert to](#4-email-alert%20to) | | | | | | [enabled](#4-enabled) | | | | | | [errorfile](#4-errorfile) | | | | | | [errorfiles](#4-errorfiles) | | | | | | [errorloc](#4-errorloc) | | | | | | [errorloc302](#4-errorloc302) | | | | | | [errorloc303](#4-errorloc303) | | | | | | [error-log-format](#4-error-log-format) | | | | | | [force-persist](#4-force-persist) | | | | | | [filter](#4-filter) | | | | | | [fullconn](#4-fullconn) | | | | | | [hash-type](#4-hash-type) | | | | | | [http-after-response](#4-http-after-response) | X (!) 
| | | | | [http-check comment](#4-http-check%20comment) | | | | | | [http-check connect](#4-http-check%20connect) | | | | | | [http-check disable-on-404](#4-http-check%20disable-on-404) | | | | | | [http-check expect](#4-http-check%20expect) | | | | | | [http-check send](#4-http-check%20send) | | | | | | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [http-check send-state](#4-http-check%20send-state) | | | | | | [http-check set-var](#4-http-check%20set-var) | | | | | | [http-check unset-var](#4-http-check%20unset-var) | | | | | | [http-error](#4-http-error) | | | | | | [http-request](#4-http-request) | X (!) | | | | | [http-response](#4-http-response) | X (!) | | | | | [http-reuse](#4-http-reuse) | | | | | | [http-send-name-header](#4-http-send-name-header) | | | | | | [id](#4-id) | | | | | | [ignore-persist](#4-ignore-persist) | | | | | | [load-server-state-from-file](#4-load-server-state-from-file) | | | | | | [(\*)log](#4-log) | | | | | | [log-format](#4-log-format) | | | | | | [log-format-sd](#4-log-format-sd) | | | | | | [log-tag](#4-log-tag) | | | | | | [max-keep-alive-queue](#4-max-keep-alive-queue) | | | | | | [maxconn](#4-maxconn) | | | | | | [mode](#4-mode) | | | | | | [monitor fail](#4-monitor%20fail) | | | | | | [monitor-uri](#4-monitor-uri) | | | | | | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [(\*)option abortonclose](#4-option%20abortonclose) | | | | | | [(\*)option accept-invalid-http-request](#4-option%20accept-invalid-http-request) | | | | | | [(\*)option accept-invalid-http-response](#4-option%20accept-invalid-http-response) | | | | | | [(\*)option allbackups](#4-option%20allbackups) | | | | | | [(\*)option checkcache](#4-option%20checkcache) | | | | | | [(\*)option clitcpka](#4-option%20clitcpka) | | | | | | [(\*)option contstats](#4-option%20contstats) | | | | | | [(\*)option disable-h2-upgrade](#4-option%20disable-h2-upgrade) | | | | | | [(\*)option dontlog-normal](#4-option%20dontlog-normal) | | | | | | [(\*)option dontlognull](#4-option%20dontlognull) | | | | | | [option forwardfor](#4-option%20forwardfor) | | | | | | [(\*)option h1-case-adjust-bogus-client](#4-option%20h1-case-adjust-bogus-client) | | | | | | [(\*)option h1-case-adjust-bogus-server](#4-option%20h1-case-adjust-bogus-server) | | | | | | [(\*)option http-buffer-request](#4-option%20http-buffer-request) | | | | | | [(\*)option http-ignore-probes](#4-option%20http-ignore-probes) | | | | | | [(\*)option http-keep-alive](#4-option%20http-keep-alive) | | | | | | [(\*)option http-no-delay](#4-option%20http-no-delay) | | | | | | [(\*)option http-pretend-keepalive](#4-option%20http-pretend-keepalive) | | | | | | [option http-restrict-req-hdr-names](#4-option%20http-restrict-req-hdr-names) | | | | | | [(\*)option http-server-close](#4-option%20http-server-close) | | | | | | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [(\*)option http-use-proxy-header](#4-option%20http-use-proxy-header) | | | | | | [option httpchk](#4-option%20httpchk) | | | | | | [(\*)option httpclose](#4-option%20httpclose) | | | | | | [option httplog](#4-option%20httplog) | | | | | | [option httpslog](#4-option%20httpslog) | | | | | | [(\*)option independent-streams](#4-option%20independent-streams) | | | | | | [option ldap-check](#4-option%20ldap-check) | | | | | | [option external-check](#4-option%20external-check) | | | | | | [(\*)option log-health-checks](#4-option%20log-health-checks) | | | | | | [(\*)option 
log-separate-errors](#4-option%20log-separate-errors) | | | | | | [(\*)option logasap](#4-option%20logasap) | | | | | | [option mysql-check](#4-option%20mysql-check) | | | | | | [(\*)option nolinger](#4-option%20nolinger) | | | | | | [option originalto](#4-option%20originalto) | | | | | | [(\*)option persist](#4-option%20persist) | | | | | | [option pgsql-check](#4-option%20pgsql-check) | | | | | | [(\*)option prefer-last-server](#4-option%20prefer-last-server) | | | | | | [(\*)option redispatch](#4-option%20redispatch) | | | | | | [option redis-check](#4-option%20redis-check) | | | | | | [option smtpchk](#4-option%20smtpchk) | | | | | | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [(\*)option socket-stats](#4-option%20socket-stats) | | | | | | [(\*)option splice-auto](#4-option%20splice-auto) | | | | | | [(\*)option splice-request](#4-option%20splice-request) | | | | | | [(\*)option splice-response](#4-option%20splice-response) | | | | | | [option spop-check](#4-option%20spop-check) | | | | | | [(\*)option srvtcpka](#4-option%20srvtcpka) | | | | | | [option ssl-hello-chk](#4-option%20ssl-hello-chk) | | | | | | [option tcp-check](#4-option%20tcp-check) | | | | | | [(\*)option tcp-smart-accept](#4-option%20tcp-smart-accept) | | | | | | [(\*)option tcp-smart-connect](#4-option%20tcp-smart-connect) | | | | | | [option tcpka](#4-option%20tcpka) | | | | | | [option tcplog](#4-option%20tcplog) | | | | | | [(\*)option transparent](#4-option%20transparent) | | | | | | [(\*)option idle-close-on-response](#4-option%20idle-close-on-response) | | | | | | [external-check command](#4-external-check%20command) | | | | | | [external-check path](#4-external-check%20path) | | | | | | [persist rdp-cookie](#4-persist%20rdp-cookie) | | | | | | [rate-limit sessions](#4-rate-limit%20sessions) | | | | | | [redirect](#4-redirect) | | | | | | [retries](#4-retries) | | | | | | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [retry-on](#4-retry-on) | | | | | | [server](#4-server) | | | | | | [server-state-file-name](#4-server-state-file-name) | | | | | | [server-template](#4-server-template) | | | | | | [source](#4-source) | | | | | | [srvtcpka-cnt](#4-srvtcpka-cnt) | | | | | | [srvtcpka-idle](#4-srvtcpka-idle) | | | | | | [srvtcpka-intvl](#4-srvtcpka-intvl) | | | | | | [stats admin](#4-stats%20admin) | | | | | | [stats auth](#4-stats%20auth) | | | | | | [stats enable](#4-stats%20enable) | | | | | | [stats hide-version](#4-stats%20hide-version) | | | | | | [stats http-request](#4-stats%20http-request) | | | | | | [stats realm](#4-stats%20realm) | | | | | | [stats refresh](#4-stats%20refresh) | | | | | | [stats scope](#4-stats%20scope) | | | | | | [stats show-desc](#4-stats%20show-desc) | | | | | | [stats show-legends](#4-stats%20show-legends) | | | | | | [stats show-node](#4-stats%20show-node) | | | | | | [stats uri](#4-stats%20uri) | | | | | | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [stick match](#4-stick%20match) | | | | | | [stick on](#4-stick%20on) | | | | | | [stick store-request](#4-stick%20store-request) | | | | | | [stick store-response](#4-stick%20store-response) | | | | | | [stick-table](#4-stick-table) | | | | | | [tcp-check comment](#4-tcp-check%20comment) | | | | | | [tcp-check connect](#4-tcp-check%20connect) | | | | | | [tcp-check expect](#4-tcp-check%20expect) | | | | | | [tcp-check send](#4-tcp-check%20send) | | | | | | [tcp-check send-lf](#4-tcp-check%20send-lf) | | | | | | 
[tcp-check send-binary](#4-tcp-check%20send-binary) | | | | | | [tcp-check send-binary-lf](#4-tcp-check%20send-binary-lf) | | | | | | [tcp-check set-var](#4-tcp-check%20set-var) | | | | | | [tcp-check unset-var](#4-tcp-check%20unset-var) | | | | | | [tcp-request connection](#4-tcp-request%20connection) | X (!) | | | | | [tcp-request content](#4-tcp-request%20content) | X (!) | | | | | [tcp-request inspect-delay](#4-tcp-request%20inspect-delay) | X (!) | | | | | [tcp-request session](#4-tcp-request%20session) | X (!) | | | | | [tcp-response content](#4-tcp-response%20content) | X (!) | | | | | [tcp-response inspect-delay](#4-tcp-response%20inspect-delay) | X (!) | | | | | keyword | defaults | frontend | listen | backend | | --- | --- | --- | --- | --- | | [timeout check](#4-timeout%20check) | | | | | | [timeout client](#4-timeout%20client) | | | | | | [timeout client-fin](#4-timeout%20client-fin) | | | | | | [timeout connect](#4-timeout%20connect) | | | | | | [timeout http-keep-alive](#4-timeout%20http-keep-alive) | | | | | | [timeout http-request](#4-timeout%20http-request) | | | | | | [timeout queue](#4-timeout%20queue) | | | | | | [timeout server](#4-timeout%20server) | | | | | | [timeout server-fin](#4-timeout%20server-fin) | | | | | | [timeout tarpit](#4-timeout%20tarpit) | | | | | | [timeout tunnel](#4-timeout%20tunnel) | | | | | | [(deprecated)transparent](#4-transparent) | | | | | | [unique-id-format](#4-unique-id-format) | | | | | | [unique-id-header](#4-unique-id-header) | | | | | | [use\_backend](#4-use_backend) | | | | | | [use-fcgi-app](#4-use-fcgi-app) | | | | | | [use-server](#4-use-server) | | | | | ### 4.2. Alphabetically sorted keywords reference ``` This section provides a description of each keyword and its usage. ``` **acl** <aclname> <criterion> [flags] [operator] <value> ... ``` Declare or complete an access list. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | yes | yes | yes | ``` This directive is only available from named defaults sections, not anonymous ones. ACLs defined in a defaults section are not visible from other sections using it. ``` Example: ``` acl invalid_src src 0.0.0.0/7 224.0.0.0/3 acl invalid_src src_port 0:1023 acl local_dst hdr(host) -i localhost ``` ``` See [section 7](#7) about ACL usage. ``` **backlog** <conns> ``` Give hints to the system about the approximate listen backlog desired size ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <conns> is the number of pending connections. Depending on the operating system, it may represent the number of already acknowledged connections, of non-acknowledged ones, or both. ``` ``` In order to protect against SYN flood attacks, one solution is to increase the system's SYN backlog size. Depending on the system, sometimes it is just tunable via a system parameter, sometimes it is not adjustable at all, and sometimes the system relies on hints given by the application at the time of the listen() syscall. By default, HAProxy passes the frontend's maxconn value to the listen() syscall. On systems which can make use of this value, it can sometimes be useful to be able to specify a different value, hence this backlog parameter. On Linux 2.4, the parameter is ignored by the system. On Linux 2.6, it is used as a hint and the system accepts up to the smallest greater power of two, and never more than some limits (usually 32768). 
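A hedged sketch (the frontend name and the numbers are illustrative only, not taken from this document): overriding the hint passed to listen() instead of relying on the frontend's maxconn.

    frontend fe_main
        bind :80
        maxconn 10000
        backlog 30000     # hint passed to listen() instead of maxconn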
``` **See also :** "maxconn" and the target operating system's tuning guide. **balance** <algorithm> [ <arguments> ] **balance url\_param** <param> [check\_post] ``` Define the load balancing algorithm to be used in a backend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <algorithm> is the algorithm used to select a server when doing load balancing. This only applies when no persistence information is available, or when a connection is redispatched to another server. <algorithm> may be one of the following : roundrobin Each server is used in turns, according to their weights. This is the smoothest and fairest algorithm when the server's processing time remains equally distributed. This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. It is limited by design to 4095 active servers per backend. Note that in some large farms, when a server becomes up after having been down for a very short time, it may sometimes take a few hundreds requests for it to be re-integrated into the farm and start receiving traffic. This is normal, though very rare. It is indicated here in case you would have the chance to observe it, so that you don't worry. static-rr Each server is used in turns, according to their weights. This algorithm is as similar to roundrobin except that it is static, which means that changing a server's weight on the fly will have no effect. On the other hand, it has no design limitation on the number of servers, and when a server goes up, it is always immediately reintroduced into the farm, once the full map is recomputed. It also uses slightly less CPU to run (around -1%). leastconn The server with the lowest number of connections receives the connection. Round-robin is performed within groups of servers of the same load to ensure that all servers will be used. Use of this algorithm is recommended where very long sessions are expected, such as LDAP, SQL, TSE, etc... but is not very well suited for protocols using short sessions such as HTTP. This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. It will also consider the number of queued connections in addition to the established ones in order to minimize queuing. first The first server with available connection slots receives the connection. The servers are chosen from the lowest numeric identifier to the highest (see server parameter "id"), which defaults to the server's position in the farm. Once a server reaches its maxconn value, the next server is used. It does not make sense to use this algorithm without setting maxconn. The purpose of this algorithm is to always use the smallest number of servers so that extra servers can be powered off during non-intensive hours. This algorithm ignores the server weight, and brings more benefit to long session such as RDP or IMAP than HTTP, though it can be useful there too. In order to use this algorithm efficiently, it is recommended that a cloud controller regularly checks server usage to turn them off when unused, and regularly checks backend queue to turn new servers on when the queue inflates. Alternatively, using "[http-check send-state](#http-check%20send-state)" may inform servers on the load. hash Takes a regular sample expression in argument. The expression is evaluated for each request and hashed according to the configured hash-type. 
The result of the hash is divided by the total weight of the running servers to designate which server will receive the request. This can be used in place of "source", "uri", "hdr()", "url_param()", "rdp-cookie" to make use of a converter, refine the evaluation, or be used to extract data from local variables for example. When the data is not available, round robin will apply. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "[hash-type](#hash-type)". source The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This ensures that the same client IP address will always reach the same server as long as no server goes down or up. If the hash result changes due to the number of running servers changing, many clients will be directed to a different server. This algorithm is generally used in TCP mode where no cookie may be inserted. It may also be used on the Internet to provide a best-effort stickiness to clients which refuse session cookies. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "[hash-type](#hash-type)". See also the "hash" option above. uri This algorithm hashes either the left part of the URI (before the question mark) or the whole URI (if the "whole" parameter is present) and divides the hash value by the total weight of the running servers. The result designates which server will receive the request. This ensures that the same URI will always be directed to the same server as long as no server goes up or down. This is used with proxy caches and anti-virus proxies in order to maximize the cache hit rate. Note that this algorithm may only be used in an HTTP backend. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "[hash-type](#hash-type)". This algorithm supports two optional parameters "len" and "depth", both followed by a positive integer number. These options may be helpful when it is needed to balance servers based on the beginning of the URI only. The "len" parameter indicates that the algorithm should only consider that many characters at the beginning of the URI to compute the hash. Note that having "len" set to 1 rarely makes sense since most URIs start with a leading "/". The "depth" parameter indicates the maximum directory depth to be used to compute the hash. One level is counted for each slash in the request. If both parameters are specified, the evaluation stops when either is reached. A "path-only" parameter indicates that the hashing key starts at the first '/' of the path. This can be used to ignore the authority part of absolute URIs, and to make sure that HTTP/1 and HTTP/2 URIs will provide the same hash. See also the "hash" option above. url_param The URL parameter specified in argument will be looked up in the query string of each HTTP GET request. If the modifier "check_post" is used, then an HTTP POST request entity will be searched for the parameter argument, when it is not found in a query string after a question mark ('?') in the URL. The message body will only start to be analyzed once either the advertised amount of data has been received or the request buffer is full. In the unlikely event that chunked encoding is used, only the first chunk is scanned. 
Parameter values separated by a chunk boundary, may be randomly balanced if at all. This keyword used to support an optional <max_wait> parameter which is now ignored. If the parameter is found followed by an equal sign ('=') and a value, then the value is hashed and divided by the total weight of the running servers. The result designates which server will receive the request. This is used to track user identifiers in requests and ensure that a same user ID will always be sent to the same server as long as no server goes up or down. If no value is found or if the parameter is not found, then a round robin algorithm is applied. Note that this algorithm may only be used in an HTTP backend. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "[hash-type](#hash-type)". See also the "hash" option above. hdr(<name>) The HTTP header <name> will be looked up in each HTTP request. Just as with the equivalent ACL 'hdr()' function, the header name in parenthesis is not case sensitive. If the header is absent or if it does not contain any value, the roundrobin algorithm is applied instead. An optional 'use_domain_only' parameter is available, for reducing the hash algorithm to the main domain part with some specific headers such as 'Host'. For instance, in the Host value "haproxy.1wt.eu", only "1wt" will be considered. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "[hash-type](#hash-type)". See also the "hash" option above. random random(<draws>) A random number will be used as the key for the consistent hashing function. This means that the servers' weights are respected, dynamic weight changes immediately take effect, as well as new server additions. Random load balancing can be useful with large farms or when servers are frequently added or removed as it may avoid the hammering effect that could result from roundrobin or leastconn in this situation. The hash-balance-factor directive can be used to further improve fairness of the load balancing, especially in situations where servers show highly variable response times. When an argument <draws> is present, it must be an integer value one or greater, indicating the number of draws before selecting the least loaded of these servers. It was indeed demonstrated that picking the least loaded of two servers is enough to significantly improve the fairness of the algorithm, by always avoiding to pick the most loaded server within a farm and getting rid of any bias that could be induced by the unfair distribution of the consistent list. Higher values N will take away N-1 of the highest loaded servers at the expense of performance. With very high values, the algorithm will converge towards the leastconn's result but much slower. The default value is 2, which generally shows very good distribution and performance. This algorithm is also known as the Power of Two Random Choices and is described here : http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf rdp-cookie rdp-cookie(<name>) The RDP cookie <name> (or "mstshash" if omitted) will be looked up and hashed for each incoming TCP request. Just as with the equivalent ACL 'req.rdp_cookie()' function, the name is not case-sensitive. This mechanism is useful as a degraded persistence mode, as it makes it possible to always send the same user (or the same session ID) to the same server. 
If the cookie is not found, the normal roundrobin algorithm is used instead. Note that for this to work, the frontend must ensure that an RDP cookie is already present in the request buffer. For this you must use 'tcp-request content accept' rule combined with a 'req.rdp_cookie_cnt' ACL. This algorithm is static by default, which means that changing a server's weight on the fly will have no effect, but this can be changed using "[hash-type](#hash-type)". See also the "hash" option above. <arguments> is an optional list of arguments which may be needed by some algorithms. Right now, only "[url\_param](#url_param)" and "uri" support an optional argument. ``` ``` The load balancing algorithm of a backend is set to roundrobin when no other algorithm, mode nor option have been set. The algorithm may only be set once for each backend. With authentication schemes that require the same connection like NTLM, URI based algorithms must not be used, as they would cause subsequent requests to be routed to different backend servers, breaking the invalid assumptions NTLM relies on. ``` Examples : ``` balance roundrobin balance url_param userid balance url_param session_id check_post 64 balance hdr(User-Agent) balance hdr(host) balance hdr(Host) use_domain_only balance hash req.cookie(clientid) balance hash var(req.client_id) balance hash req.hdr_ip(x-forwarded-for,-1),ipmask(24) ``` ``` Note: the following caveats and limitations on using the "check_post" extension with "[url\_param](#url_param)" must be considered : - all POST requests are eligible for consideration, because there is no way to determine if the parameters will be found in the body or entity which may contain binary data. Therefore another method may be required to restrict consideration of POST requests that have no URL parameters in the body. (see acl http_end) - using a <max_wait> value larger than the request buffer size does not make sense and is useless. The buffer size is set at build time, and defaults to 16 kB. - Content-Encoding is not supported, the parameter search will probably fail; and load balancing will fall back to Round Robin. - Expect: 100-continue is not supported, load balancing will fall back to Round Robin. - Transfer-Encoding (RFC7230 3.3.1) is only supported in the first chunk. If the entire parameter value is not present in the first chunk, the selection of server is undefined (actually, defined by how little actually appeared in the first chunk). - This feature does not support generation of a 100, 411 or 501 response. - In some cases, requesting "check_post" MAY attempt to scan the entire contents of a message body. Scanning normally terminates when linear white space or control characters are found, indicating the end of what might be a URL parameter list. This is probably not a concern with SGML type message bodies. ``` **See also :** "[dispatch](#dispatch)", "cookie", "[transparent](#option%20transparent)", "[hash-type](#hash-type)". **bind** [<address>]:<port\_range> [, ...] [param\*] **bind** /<path> [, ...] [param\*] ``` Define one or several listening addresses and/or ports in a frontend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | no | Arguments : ``` <address> is optional and can be a host name, an IPv4 address, an IPv6 address, or '*'. It designates the address the frontend will listen on. If unset, all IPv4 addresses of the system will be listened on. The same will apply for '*' or the system's special address "0.0.0.0". 
The IPv6 equivalent is '::'. Note that if you bind a frontend to multiple UDP addresses you have no guarantee about the address which will be used to respond. This is why "0.0.0.0" addresses and lists of comma-separated IP addresses have been forbidden to bind QUIC addresses. Optionally, an address family prefix may be used before the address to force the family regardless of the address format, which can be useful to specify a path to a unix socket with no slash ('/'). Currently supported prefixes are : - 'ipv4@' -> address is always IPv4 - 'ipv6@' -> address is always IPv6 - 'udp@' -> address is resolved as IPv4 or IPv6 and protocol UDP is used. Currently those listeners are supported only in log-forward sections. - 'udp4@' -> address is always IPv4 and protocol UDP is used. Currently those listeners are supported only in log-forward sections. - 'udp6@' -> address is always IPv6 and protocol UDP is used. Currently those listeners are supported only in log-forward sections. - 'unix@' -> address is a path to a local unix socket - 'abns@' -> address is in abstract namespace (Linux only). - 'fd@<n>' -> use file descriptor <n> inherited from the parent. The fd must be bound and may or may not already be listening. - 'sockpair@<n>'-> like fd@ but you must use the fd of a connected unix socket or of a socketpair. The bind waits to receive a FD over the unix socket and uses it as if it was the FD of an accept(). Should be used carefully. - 'quic4@' -> address is resolved as IPv4 and protocol UDP is used. Note that by default QUIC connections attached to a listener will be multiplexed over the listener socket. With a large traffic this has a noticeable impact on performance and CPU consumption. To improve this, you can change default settings of "tune.quic.conn-owner" to connection or at least duplicate QUIC listener instances over several threads, for example using "shards" keyword. - 'quic6@' -> address is resolved as IPv6 and protocol UDP is used. The performance note for QUIC over IPv4 applies as well. You may want to reference some environment variables in the address parameter, see [section 2.3](#2.3) about environment variables. <port_range> is either a unique TCP port, or a port range for which the proxy will accept connections for the IP address specified above. The port is mandatory for TCP listeners. Note that in the case of an IPv6 address, the port is always the number after the last colon (':'). A range can either be : - a numerical port (ex: '80') - a dash-delimited ports range explicitly stating the lower and upper bounds (ex: '2000-2100') which are included in the range. Particular care must be taken against port ranges, because every <address:port> couple consumes one socket (= a file descriptor), so it's easy to consume lots of descriptors with a simple range, and to run out of sockets. Also, each <address:port> couple must be used only once among all instances running on a same system. Please note that binding to ports lower than 1024 generally require particular privileges to start the program, which are independent of the 'uid' parameter. <path> is a UNIX socket path beginning with a slash ('/'). This is alternative to the TCP listening port. HAProxy will then receive UNIX connections on the socket located at this place. The path must begin with a slash and by default is absolute. It can be relative to the prefix defined by "[unix-bind](#unix-bind)" in the global section. 
Note that the total length of the prefix followed by the socket path cannot exceed some system limits for UNIX sockets, which are commonly set to 107 characters. <param*> is a list of parameters common to all sockets declared on the same line. These numerous parameters depend on OS and build options and have a complete section dedicated to them. Please refer to [section 5](#5) for more details. ``` ``` It is possible to specify a list of address:port combinations delimited by commas. The frontend will then listen on all of these addresses. There is no fixed limit to the number of addresses and ports which can be listened on in a frontend, nor is there a limit to the number of "bind" statements in a frontend. ``` Example : ``` listen http_proxy bind :80,:443 bind 10.0.0.1:10080,10.0.0.1:10443 bind /var/run/ssl-frontend.sock user root mode 600 accept-proxy listen http_https_proxy bind :80 bind :443 ssl crt /etc/haproxy/site.pem listen http_https_proxy_explicit bind ipv6@:80 bind ipv4@public_ssl:443 ssl crt /etc/haproxy/site.pem bind unix@ssl-frontend.sock user root mode 600 accept-proxy listen external_bind_app1 bind "fd@${FD_APP1}" listen h3_quic_proxy bind quic4@10.0.0.1:8888 ssl crt /etc/mycrt alpn h3 ``` ``` Note: regarding Linux's abstract namespace sockets, HAProxy uses the whole sun_path length for the address length. Some other programs such as socat use the string length only by default. Pass the option ",unix-tightsocklen=0" to any abstract socket definition in socat to make it compatible with HAProxy's. ``` **See also :** "source", "[option forwardfor](#option%20forwardfor)", "[unix-bind](#unix-bind)" and the PROXY protocol documentation, and [section 5](#5) about bind options. **capture cookie** <name> len <length> ``` Capture and log a cookie in the request and in the response. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | no | Arguments : ``` <name> is the beginning of the name of the cookie to capture. In order to match the exact name, simply suffix the name with an equal sign ('='). The full name will appear in the logs, which is useful with application servers which adjust both the cookie name and value (e.g. ASPSESSIONXXX). <length> is the maximum number of characters to report in the logs, which include the cookie name, the equal sign and the value, all in the standard "name=value" form. The string will be truncated on the right if it exceeds <length>. ``` ``` Only the first cookie is captured. Both the "cookie" request headers and the "[set-cookie](#set-cookie)" response headers are monitored. This is particularly useful to check for application bugs causing session crossing or stealing between users, because generally the user's cookies can only change on a login page. When the cookie was not presented by the client, the associated log column will report "-". When a request does not cause a cookie to be assigned by the server, a "-" is reported in the response column. The capture is performed in the frontend only because it is necessary that the log format does not change for a given frontend depending on the backends. This may change in the future. Note that there can be only one "[capture cookie](#capture%20cookie)" statement in a frontend. The maximum capture length is set by the global "[tune.http.cookielen](#tune.http.cookielen)" setting and defaults to 63 characters. It is not possible to specify a capture in a "defaults" section.
``` Example: ``` capture cookie ASPSESSION len 32 ``` **See also :** "[capture request header](#capture%20request%20header)", "[capture response header](#capture%20response%20header)" as well as [section 8](#8) about logging. **capture request header** <name> len <length> ``` Capture and log the last occurrence of the specified request header. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | no | Arguments : ``` <name> is the name of the header to capture. The header names are not case-sensitive, but it is a common practice to write them as they appear in the requests, with the first letter of each word in upper case. The header name will not appear in the logs, only the value is reported, but the position in the logs is respected. <length> is the maximum number of characters to extract from the value and report in the logs. The string will be truncated on the right if it exceeds <length>. ``` ``` The complete value of the last occurrence of the header is captured. The value will be added to the logs between braces ('{}'). If multiple headers are captured, they will be delimited by a vertical bar ('|') and will appear in the same order they were declared in the configuration. Non-existent headers will be logged just as an empty string. Common uses for request header captures include the "Host" field in virtual hosting environments, the "Content-length" when uploads are supported, "User-agent" to quickly differentiate between real users and robots, and "X-Forwarded-For" in proxied environments to find where the request came from. Note that when capturing headers such as "User-agent", some spaces may be logged, making the log analysis more difficult. Thus be careful about what you log if you know your log parser is not smart enough to rely on the braces. There is no limit to the number of captured request headers nor to their length, though it is wise to keep them low to limit memory usage per session. In order to keep log format consistent for a same frontend, header captures can only be declared in a frontend. It is not possible to specify a capture in a "defaults" section. ``` Example: ``` capture request header Host len 15 capture request header X-Forwarded-For len 15 capture request header Referer len 15 ``` **See also :** "[capture cookie](#capture%20cookie)", "[capture response header](#capture%20response%20header)" as well as [section 8](#8) about logging. **capture response header** <name> len <length> ``` Capture and log the last occurrence of the specified response header. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | no | Arguments : ``` <name> is the name of the header to capture. The header names are not case-sensitive, but it is a common practice to write them as they appear in the response, with the first letter of each word in upper case. The header name will not appear in the logs, only the value is reported, but the position in the logs is respected. <length> is the maximum number of characters to extract from the value and report in the logs. The string will be truncated on the right if it exceeds <length>. ``` ``` The complete value of the last occurrence of the header is captured. The result will be added to the logs between braces ('{}') after the captured request headers. If multiple headers are captured, they will be delimited by a vertical bar ('|') and will appear in the same order they were declared in the configuration. 
Non-existent headers will be logged just as an empty string. Common uses for response header captures include the "Content-length" header which indicates how many bytes are expected to be returned, the "Location" header to track redirections. There is no limit to the number of captured response headers nor to their length, though it is wise to keep them low to limit memory usage per session. In order to keep log format consistent for a same frontend, header captures can only be declared in a frontend. It is not possible to specify a capture in a "defaults" section. ``` Example: ``` capture response header Content-length len 9 capture response header Location len 15 ``` **See also :** "[capture cookie](#capture%20cookie)", "[capture request header](#capture%20request%20header)" as well as [section 8](#8) about logging. **clitcpka-cnt** <count> ``` Sets the maximum number of keepalive probes TCP should send before dropping the connection on the client side. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <count> is the maximum number of keepalive probes. ``` ``` This keyword corresponds to the socket option TCP_KEEPCNT. If this keyword is not specified, system-wide TCP parameter (tcp_keepalive_probes) is used. The availability of this setting depends on the operating system. It is known to work on Linux. ``` **See also :** "[option clitcpka](#option%20clitcpka)", "[clitcpka-idle](#clitcpka-idle)", "[clitcpka-intvl](#clitcpka-intvl)". **clitcpka-idle** <timeout> ``` Sets the time the connection needs to remain idle before TCP starts sending keepalive probes, if enabled the sending of TCP keepalive packets on the client side. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <timeout> is the time the connection needs to remain idle before TCP starts sending keepalive probes. It is specified in seconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` This keyword corresponds to the socket option TCP_KEEPIDLE. If this keyword is not specified, system-wide TCP parameter (tcp_keepalive_time) is used. The availability of this setting depends on the operating system. It is known to work on Linux. ``` **See also :** "[option clitcpka](#option%20clitcpka)", "[clitcpka-cnt](#clitcpka-cnt)", "[clitcpka-intvl](#clitcpka-intvl)". **clitcpka-intvl** <timeout> ``` Sets the time between individual keepalive probes on the client side. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <timeout> is the time between individual keepalive probes. It is specified in seconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` This keyword corresponds to the socket option TCP_KEEPINTVL. If this keyword is not specified, system-wide TCP parameter (tcp_keepalive_intvl) is used. The availability of this setting depends on the operating system. It is known to work on Linux. ``` **See also :** "[option clitcpka](#option%20clitcpka)", "[clitcpka-cnt](#clitcpka-cnt)", "[clitcpka-idle](#clitcpka-idle)". **compression algo** <algorithm> ... **compression type** <mime type> ... ``` Enable HTTP compression. 
``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` algo is followed by the list of supported compression algorithms. type is followed by the list of MIME types that will be compressed. ``` ``` The currently supported algorithms are : identity this is mostly for debugging, and it was useful for developing the compression feature. Identity does not apply any change on data. gzip applies gzip compression. This setting is only available when support for zlib or libslz was built in. deflate same as "gzip", but with deflate algorithm and zlib format. Note that this algorithm has ambiguous support on many browsers and no support at all from recent ones. It is strongly recommended not to use it for anything else than experimentation. This setting is only available when support for zlib or libslz was built in. raw-deflate same as "deflate" without the zlib wrapper, and used as an alternative when the browser wants "deflate". All major browsers understand it and despite violating the standards, it is known to work better than "deflate", at least on MSIE and some versions of Safari. Do not use it in conjunction with "deflate", use either one or the other since both react to the same Accept-Encoding token. This setting is only available when support for zlib or libslz was built in. Compression will be activated depending on the Accept-Encoding request header. With identity, it does not take care of that header. If backend servers support HTTP compression, these directives will be no-op: HAProxy will see the compressed response and will not compress again. If backend servers do not support HTTP compression and there is Accept-Encoding header in request, HAProxy will compress the matching response. Compression is disabled when: * the request does not advertise a supported compression algorithm in the "Accept-Encoding" header * the response message is not HTTP/1.1 or above * HTTP status code is not one of 200, 201, 202, or 203 * response contain neither a "Content-Length" header nor a "Transfer-Encoding" whose last value is "chunked" * response contains a "Content-Type" header whose first value starts with "multipart" * the response contains the "no-transform" value in the "Cache-control" header * User-Agent matches "Mozilla/4" unless it is MSIE 6 with XP SP2, or MSIE 7 and later * The response contains a "Content-Encoding" header, indicating that the response is already compressed (see compression offload) * The response contains an invalid "ETag" header or multiple ETag headers Note: The compression does not emit the Warning header. ``` Examples : ``` compression algo gzip compression type text/html text/plain ``` **See also :** "[compression offload](#compression%20offload)" **compression offload** ``` Makes HAProxy work as a compression offloader only. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | yes | ``` The "offload" setting makes HAProxy remove the Accept-Encoding header to prevent backend servers from compressing responses. It is strongly recommended not to do this because this means that all the compression work will be done on the single point where HAProxy is located. However in some deployment scenarios, HAProxy may be installed in front of a buggy gateway with broken HTTP compression implementation which can't be turned off. In that case HAProxy can be used to prevent that gateway from emitting invalid payloads. 
In this case, simply removing the header in the configuration does not work because it applies before the header is parsed, so that prevents HAProxy from compressing. The "offload" setting should then be used for such scenarios. If this setting is used in a defaults section, a warning is emitted and the option is ignored. ``` **See also :** "[compression type](#compression%20type)", "[compression algo](#compression%20algo)" **cookie** <name> [ rewrite | insert | prefix ] [ indirect ] [ nocache ] [ postonly ] [ preserve ] [ httponly ] [ secure ] [ domain <domain> ]\* [ maxidle <idle> ] [ maxlife <life> ] [ dynamic ] [ attr <value> ]\* ``` Enable cookie-based persistence in a backend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <name> is the name of the cookie which will be monitored, modified or inserted in order to bring persistence. This cookie is sent to the client via a "Set-Cookie" header in the response, and is brought back by the client in a "Cookie" header in all requests. Special care should be taken to choose a name which does not conflict with any likely application cookie. Also, if the same backends are subject to be used by the same clients (e.g. HTTP/HTTPS), care should be taken to use different cookie names between all backends if persistence between them is not desired. rewrite This keyword indicates that the cookie will be provided by the server and that HAProxy will have to modify its value to set the server's identifier in it. This mode is handy when the management of complex combinations of "Set-cookie" and "Cache-control" headers is left to the application. The application can then decide whether or not it is appropriate to emit a persistence cookie. Since all responses should be monitored, this mode doesn't work in HTTP tunnel mode. Unless the application behavior is very complex and/or broken, it is advised not to start with this mode for new deployments. This keyword is incompatible with "insert" and "prefix". insert This keyword indicates that the persistence cookie will have to be inserted by HAProxy in server responses if the client did not already have a cookie that would have permitted it to access this server. When used without the "preserve" option, if the server emits a cookie with the same name, it will be removed before processing. For this reason, this mode can be used to upgrade existing configurations running in the "rewrite" mode. The cookie will only be a session cookie and will not be stored on the client's disk. By default, unless the "indirect" option is added, the server will see the cookies emitted by the client. Due to caching effects, it is generally wise to add the "nocache" or "postonly" keywords (see below). The "insert" keyword is not compatible with "rewrite" and "prefix". prefix This keyword indicates that instead of relying on a dedicated cookie for the persistence, an existing one will be completed. This may be needed in some specific environments where the client does not support more than one single cookie and the application already needs it. In this case, whenever the server sets a cookie named <name>, it will be prefixed with the server's identifier and a delimiter. The prefix will be removed from all client requests so that the server still finds the cookie it emitted. Since all requests and responses are subject to being modified, this mode doesn't work with tunnel mode. 
The "prefix" keyword is not compatible with "rewrite" and "insert". Note: it is highly recommended not to use "indirect" with "prefix", otherwise server cookie updates would not be sent to clients. indirect When this option is specified, no cookie will be emitted to a client which already has a valid one for the server which has processed the request. If the server sets such a cookie itself, it will be removed, unless the "preserve" option is also set. In "insert" mode, this will additionally remove cookies from the requests transmitted to the server, making the persistence mechanism totally transparent from an application point of view. Note: it is highly recommended not to use "indirect" with "prefix", otherwise server cookie updates would not be sent to clients. nocache This option is recommended in conjunction with the insert mode when there is a cache between the client and HAProxy, as it ensures that a cacheable response will be tagged non-cacheable if a cookie needs to be inserted. This is important because if all persistence cookies are added on a cacheable home page for instance, then all customers will then fetch the page from an outer cache and will all share the same persistence cookie, leading to one server receiving much more traffic than others. See also the "insert" and "postonly" options. postonly This option ensures that cookie insertion will only be performed on responses to POST requests. It is an alternative to the "nocache" option, because POST responses are not cacheable, so this ensures that the persistence cookie will never get cached. Since most sites do not need any sort of persistence before the first POST which generally is a login request, this is a very efficient method to optimize caching without risking to find a persistence cookie in the cache. See also the "insert" and "nocache" options. preserve This option may only be used with "insert" and/or "indirect". It allows the server to emit the persistence cookie itself. In this case, if a cookie is found in the response, HAProxy will leave it untouched. This is useful in order to end persistence after a logout request for instance. For this, the server just has to emit a cookie with an invalid value (e.g. empty) or with a date in the past. By combining this mechanism with the "disable-on-404" check option, it is possible to perform a completely graceful shutdown because users will definitely leave the server after they logout. httponly This option tells HAProxy to add an "HttpOnly" cookie attribute when a cookie is inserted. This attribute is used so that a user agent doesn't share the cookie with non-HTTP components. Please check RFC6265 for more information on this attribute. secure This option tells HAProxy to add a "Secure" cookie attribute when a cookie is inserted. This attribute is used so that a user agent never emits this cookie over non-secure channels, which means that a cookie learned with this flag will be presented only over SSL/TLS connections. Please check RFC6265 for more information on this attribute. domain This option allows to specify the domain at which a cookie is inserted. It requires exactly one parameter: a valid domain name. If the domain begins with a dot, the browser is allowed to use it for any host ending with that name. It is also possible to specify several domain names by invoking this option multiple times. Some browsers might have small limits on the number of domains, so be careful when doing that. 
For the record, sending 10 domains to MSIE 6 or Firefox 2 works as expected. maxidle This option allows inserted cookies to be ignored after some idle time. It only works with insert-mode cookies. When a cookie is sent to the client, the date this cookie was emitted is sent too. Upon further presentations of this cookie, if the date is older than the delay indicated by the parameter (in seconds), it will be ignored. Otherwise, it will be refreshed if needed when the response is sent to the client. This is particularly useful to prevent users who never close their browsers from remaining for too long on the same server (e.g. after a farm size change). When this option is set and a cookie has no date, it is always accepted, but gets refreshed in the response. This maintains the ability for admins to access their sites. Cookies that have a date in the future further than 24 hours are ignored. Doing so lets admins fix timezone issues without risking kicking users off the site. maxlife This option allows inserted cookies to be ignored after some life time, whether they're in use or not. It only works with insert mode cookies. When a cookie is first sent to the client, the date this cookie was emitted is sent too. Upon further presentations of this cookie, if the date is older than the delay indicated by the parameter (in seconds), it will be ignored. If the cookie in the request has no date, it is accepted and a date will be set. Cookies that have a date in the future further than 24 hours are ignored. Doing so lets admins fix timezone issues without risking kicking users off the site. Contrary to maxidle, this value is not refreshed, only the first visit date counts. Both maxidle and maxlife may be used at the time. This is particularly useful to prevent users who never close their browsers from remaining for too long on the same server (e.g. after a farm size change). This is stronger than the maxidle method in that it forces a redispatch after some absolute delay. dynamic Activate dynamic cookies. When used, a session cookie is dynamically created for each server, based on the IP and port of the server, and a secret key, specified in the "[dynamic-cookie-key](#dynamic-cookie-key)" backend directive. The cookie will be regenerated each time the IP address change, and is only generated for IPv4/IPv6. attr This option tells HAProxy to add an extra attribute when a cookie is inserted. The attribute value can contain any characters except control ones or ";". This option may be repeated. ``` ``` There can be only one persistence cookie per HTTP backend, and it can be declared in a defaults section. The value of the cookie will be the value indicated after the "cookie" keyword in a "server" statement. If no cookie is declared for a given server, the cookie is not set. ``` Examples : ``` cookie JSESSIONID prefix cookie SRV insert indirect nocache cookie SRV insert postonly indirect cookie SRV insert indirect nocache maxidle 30m maxlife 8h ``` **See also :** "balance source", "[capture cookie](#capture%20cookie)", "server" and "[ignore-persist](#ignore-persist)". **declare capture** [ request | response ] len <length> ``` Declares a capture slot. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | no | Arguments: ``` <length> is the length allowed for the capture. ``` ``` This declaration is only available in the frontend or listen section, but the reserved slot can be used in the backends. 
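For illustration only, a minimal sketch pairing a declared slot with an
"http-request capture" rule (the captured header and the length are arbitrary
choices, and the slot is assumed to receive id 0 as the first one declared) :

    declare capture request len 32
    http-request capture req.hdr(Host) id 0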
The "request" keyword allocates a capture slot for use in the request, and "response" allocates a capture slot for use in the response. ``` **See also:** "[capture-req](#capture-req)", "[capture-res](#capture-res)" (sample converters), "[capture.req.hdr](#capture.req.hdr)", "[capture.res.hdr](#capture.res.hdr)" (sample fetches), "[http-request capture](#http-request%20capture)" and "[http-response capture](#http-response%20capture)". **default-server** [param\*] ``` Change default options for a server in a backend ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments: ``` <param*> is a list of parameters for this server. The "default-server" keyword accepts an important number of options and has a complete section dedicated to it. Please refer to [section 5](#5) for more details. ``` Example : ``` default-server inter 1000 weight 13 ``` **See also:** "server" and [section 5](#5) about server options **default\_backend** <backend> ``` Specify the backend to use when no "[use\_backend](#use_backend)" rule has been matched. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <backend> is the name of the backend to use. ``` ``` When doing content-switching between frontend and backends using the "[use\_backend](#use_backend)" keyword, it is often useful to indicate which backend will be used when no rule has matched. It generally is the dynamic backend which will catch all undetermined requests. ``` Example : ``` use_backend dynamic if url_dyn use_backend static if url_css url_img extension_img default_backend dynamic ``` **See also :** "[use\_backend](#use_backend)" **description** <string> ``` Describe a listen, frontend or backend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | yes | Arguments : string ``` Allows to add a sentence to describe the related object in the HAProxy HTML stats page. The description will be printed on the right of the object name it describes. No need to backslash spaces in the <string> arguments. ``` **disabled** ``` Disable a proxy, frontend or backend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` The "disabled" keyword is used to disable an instance, mainly in order to liberate a listening port or to temporarily disable a service. The instance will still be created and its configuration will be checked, but it will be created in the "stopped" state and will appear as such in the statistics. It will not receive any traffic nor will it send any health-checks or logs. It is possible to disable many instances at once by adding the "disabled" keyword in a "defaults" section. ``` **See also :** "enabled" **dispatch** <address>:<port> ``` Set a default server address ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | Arguments : ``` <address> is the IPv4 address of the default server. Alternatively, a resolvable hostname is supported, but this name will be resolved during start-up. <ports> is a mandatory port specification. All connections will be sent to this port, and it is not permitted to use port offsets as is possible with normal servers. 
``` ``` The "[dispatch](#dispatch)" keyword designates a default server for use when no other server can take the connection. In the past it was used to forward non persistent connections to an auxiliary load balancer. Due to its simple syntax, it has also been used for simple TCP relays. It is recommended not to use it for more clarity, and to use the "server" directive instead. ``` **See also :** "server" **dynamic-cookie-key** <string> ``` Set the dynamic cookie secret key for a backend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : The secret key to be used. ``` When dynamic cookies are enabled (see the "dynamic" directive for cookie), a dynamic cookie is created for each server (unless one is explicitly specified on the "server" line), using a hash of the IP address of the server, the TCP port, and the secret key. That way, we can ensure session persistence across multiple load-balancers, even if servers are dynamically added or removed. ``` **enabled** ``` Enable a proxy, frontend or backend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` The "enabled" keyword is used to explicitly enable an instance, when the defaults has been set to "disabled". This is very rarely used. ``` **See also :** "disabled" **errorfile** <code> <file> ``` Return a file contents instead of errors generated by HAProxy ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <code> is the HTTP status code. Currently, HAProxy is capable of generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504. <file> designates a file containing the full HTTP response. It is recommended to follow the common practice of appending ".http" to the filename so that people do not confuse the response with HTML error pages, and to use absolute paths, since files are read before any chroot is performed. ``` ``` It is important to understand that this keyword is not meant to rewrite errors returned by the server, but errors detected and returned by HAProxy. This is why the list of supported errors is limited to a small set. Code 200 is emitted in response to requests matching a "[monitor-uri](#monitor-uri)" rule. The files are parsed when HAProxy starts and must be valid according to the HTTP specification. They should not exceed the configured buffer size (BUFSIZE), which generally is 16 kB, otherwise an internal error will be returned. It is also wise not to put any reference to local contents (e.g. images) in order to avoid loops between the client and HAProxy when all servers are down, causing an error to be returned instead of an image. Finally, The response cannot exceed (tune.bufsize - tune.maxrewrite) so that "[http-after-response](#http-after-response)" rules still have room to operate (see "[tune.maxrewrite](#tune.maxrewrite)"). The files are read at the same time as the configuration and kept in memory. For this reason, the errors continue to be returned even when the process is chrooted, and no file change is considered while the process is running. A simple method for developing those files consists in associating them to the 403 status code and interrogating a blocked URL. 
``` **See also :** "[http-error](#http-error)", "[errorloc](#errorloc)", "[errorloc302](#errorloc302)", "[errorloc303](#errorloc303)" Example : ``` errorfile 400 /etc/haproxy/errorfiles/400badreq.http errorfile 408 /dev/null # work around Chrome pre-connect bug errorfile 403 /etc/haproxy/errorfiles/403forbid.http errorfile 503 /etc/haproxy/errorfiles/503sorry.http ``` **errorfiles** <name> [<code> ...] ``` Import, fully or partially, the error files defined in the <name> http-errors section. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <name> is the name of an existing http-errors section. <code> is a HTTP status code. Several status code may be listed. Currently, HAProxy is capable of generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504. ``` ``` Errors defined in the http-errors section with the name <name> are imported in the current proxy. If no status code is specified, all error files of the http-errors section are imported. Otherwise, only error files associated to the listed status code are imported. Those error files override the already defined custom errors for the proxy. And they may be overridden by following ones. Functionally, it is exactly the same as declaring all error files by hand using "errorfile" directives. ``` **See also :** "[http-error](#http-error)", "errorfile", "[errorloc](#errorloc)", "[errorloc302](#errorloc302)" , "[errorloc303](#errorloc303)" and [section 3.8](#3.8) about http-errors. Example : ``` errorfiles generic errorfiles site-1 403 404 ``` **errorloc** <code> <url> **errorloc302** <code> <url> ``` Return an HTTP redirection to a URL instead of errors generated by HAProxy ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <code> is the HTTP status code. Currently, HAProxy is capable of generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504. <url> it is the exact contents of the "Location" header. It may contain either a relative URI to an error page hosted on the same site, or an absolute URI designating an error page on another site. Special care should be given to relative URIs to avoid redirect loops if the URI itself may generate the same error (e.g. 500). ``` ``` It is important to understand that this keyword is not meant to rewrite errors returned by the server, but errors detected and returned by HAProxy. This is why the list of supported errors is limited to a small set. Code 200 is emitted in response to requests matching a "[monitor-uri](#monitor-uri)" rule. Note that both keyword return the HTTP 302 status code, which tells the client to fetch the designated URL using the same HTTP method. This can be quite problematic in case of non-GET methods such as POST, because the URL sent to the client might not be allowed for something other than GET. To work around this problem, please use "[errorloc303](#errorloc303)" which send the HTTP 303 status code, indicating to the client that the URL must be fetched with a GET request. 
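For illustration only, a brief sketch (the URL is hypothetical; use one form
or the other for a given status code) :

    errorloc    503 https://static.example.com/maintenance.html   # 302 redirect
    errorloc303 503 https://static.example.com/maintenance.html   # 303 redirect, forces a GET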
``` **See also :** "[http-error](#http-error)", "errorfile", "[errorloc303](#errorloc303)" **errorloc303** <code> <url> ``` Return an HTTP redirection to a URL instead of errors generated by HAProxy ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <code> is the HTTP status code. Currently, HAProxy is capable of generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504. <url> it is the exact contents of the "Location" header. It may contain either a relative URI to an error page hosted on the same site, or an absolute URI designating an error page on another site. Special care should be given to relative URIs to avoid redirect loops if the URI itself may generate the same error (e.g. 500). ``` ``` It is important to understand that this keyword is not meant to rewrite errors returned by the server, but errors detected and returned by HAProxy. This is why the list of supported errors is limited to a small set. Code 200 is emitted in response to requests matching a "[monitor-uri](#monitor-uri)" rule. Note that both keyword return the HTTP 303 status code, which tells the client to fetch the designated URL using the same HTTP GET method. This solves the usual problems associated with "[errorloc](#errorloc)" and the 302 code. It is possible that some very old browsers designed before HTTP/1.1 do not support it, but no such problem has been reported till now. ``` **See also :** "[http-error](#http-error)", "errorfile", "[errorloc](#errorloc)", "[errorloc302](#errorloc302)" **email-alert from** <emailaddr> ``` Declare the from email address to be used in both the envelope and header of email alerts. This is the address that email alerts are sent from. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <emailaddr> is the from email address to use when sending email alerts ``` ``` Also requires "[email-alert mailers](#email-alert%20mailers)" and "[email-alert to](#email-alert%20to)" to be set and if so sending email alerts is enabled for the proxy. ``` **See also :** "[email-alert level](#email-alert%20level)", "[email-alert mailers](#email-alert%20mailers)", "[email-alert myhostname](#email-alert%20myhostname)", "[email-alert to](#email-alert%20to)", [section 3.6](#3.6) about mailers. **email-alert level** <level> ``` Declare the maximum log level of messages for which email alerts will be sent. This acts as a filter on the sending of email alerts. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <level> One of the 8 syslog levels: emerg alert crit err warning notice info debug The above syslog levels are ordered from lowest to highest. ``` ``` By default level is alert Also requires "[email-alert from](#email-alert%20from)", "[email-alert mailers](#email-alert%20mailers)" and "[email-alert to](#email-alert%20to)" to be set and if so sending email alerts is enabled for the proxy. 
Alerts are sent when :

* An un-paused server is marked as down and <level> is alert or lower
* A paused server is marked as down and <level> is notice or lower
* A server is marked as up or enters the drain state and <level> is notice or lower
* "[option log-health-checks](#option%20log-health-checks)" is enabled, <level> is info or lower, and a health check status update occurs
```
**See also :** "[email-alert from](#email-alert%20from)", "[email-alert mailers](#email-alert%20mailers)", "[email-alert myhostname](#email-alert%20myhostname)", "[email-alert to](#email-alert%20to)", [section 3.6](#3.6) about mailers.

**email-alert mailers** <mailersect>
```
Declare the mailers to be used when sending email alerts
```
May be used in sections :

| defaults | frontend | listen | backend |
| --- | --- | --- | --- |
| yes | yes | yes | yes |

Arguments :
```
<mailersect> is the name of the mailers section to send email alerts.
```
```
Also requires "[email-alert from](#email-alert%20from)" and "[email-alert to](#email-alert%20to)" to be set and if so sending
email alerts is enabled for the proxy.
```
**See also :** "[email-alert from](#email-alert%20from)", "[email-alert level](#email-alert%20level)", "[email-alert myhostname](#email-alert%20myhostname)", "[email-alert to](#email-alert%20to)", [section 3.6](#3.6) about mailers.

**email-alert myhostname** <hostname>
```
Declare the hostname to be used when communicating with mailers.
```
May be used in sections :

| defaults | frontend | listen | backend |
| --- | --- | --- | --- |
| yes | yes | yes | yes |

Arguments :
```
<hostname> is the hostname to use when communicating with mailers
```
```
By default the system's hostname is used.

Also requires "[email-alert from](#email-alert%20from)", "[email-alert mailers](#email-alert%20mailers)" and "[email-alert to](#email-alert%20to)" to be set
and if so sending email alerts is enabled for the proxy.
```
**See also :** "[email-alert from](#email-alert%20from)", "[email-alert level](#email-alert%20level)", "[email-alert mailers](#email-alert%20mailers)", "[email-alert to](#email-alert%20to)", [section 3.6](#3.6) about mailers.

**email-alert to** <emailaddr>
```
Declare both the recipient address in the envelope and the To address in the header of email alerts. This is the address that email alerts are sent to.
```
May be used in sections :

| defaults | frontend | listen | backend |
| --- | --- | --- | --- |
| yes | yes | yes | yes |

Arguments :
```
<emailaddr> is the to email address to use when sending email alerts
```
```
Also requires "[email-alert from](#email-alert%20from)" and "[email-alert mailers](#email-alert%20mailers)" to be set and if so sending
email alerts is enabled for the proxy.
```
**See also :** "[email-alert from](#email-alert%20from)", "[email-alert level](#email-alert%20level)", "[email-alert mailers](#email-alert%20mailers)", "[email-alert myhostname](#email-alert%20myhostname)", [section 3.6](#3.6) about mailers.

**error-log-format** <string>
```
Specifies the log format string to use in case of connection error on the frontend side.
```
May be used in sections :

| defaults | frontend | listen | backend |
| --- | --- | --- | --- |
| yes | yes | yes | no |

```
This directive specifies the log format string that will be used for logs
containing information related to errors, timeouts, retries, redispatches or
HTTP status code 5xx.
This format will in short be used for every log line that would be concerned by the "[log-separate-errors](#option%20log-separate-errors)" option, including connection errors described in [section 8.2.5](#8.2.5). If the directive is used in a defaults section, all subsequent frontends will use the same log format. Please see [section 8.2.4](#8.2.4) which covers the log format string in depth. "[error-log-format](#error-log-format)" directive overrides previous "[error-log-format](#error-log-format)" directives. ``` **force-persist** { if | unless } <condition> ``` Declare a condition to force persistence on down servers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | ``` By default, requests are not dispatched to down servers. It is possible to force this using "[option persist](#option%20persist)", but it is unconditional and redispatches to a valid server if "[option redispatch](#option%20redispatch)" is set. That leaves with very little possibilities to force some requests to reach a server which is artificially marked down for maintenance operations. The "[force-persist](#force-persist)" statement allows one to declare various ACL-based conditions which, when met, will cause a request to ignore the down status of a server and still try to connect to it. That makes it possible to start a server, still replying an error to the health checks, and run a specially configured browser to test the service. Among the handy methods, one could use a specific source IP address, or a specific cookie. The cookie also has the advantage that it can easily be added/removed on the browser from a test page. Once the service is validated, it is then possible to open the service to the world by returning a valid response to health checks. The forced persistence is enabled when an "if" condition is met, or unless an "unless" condition is met. The final redispatch is always disabled when this is used. ``` **See also :** "[option redispatch](#option%20redispatch)", "[ignore-persist](#ignore-persist)", "[persist](#option%20persist)", and [section 7](#7) about ACL usage. **filter** <name> [param\*] ``` Add the filter <name> in the filter list attached to the proxy. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | yes | Arguments : ``` <name> is the name of the filter. Officially supported filters are referenced in [section 9](#9). <param*> is a list of parameters accepted by the filter <name>. The parsing of these parameters are the responsibility of the filter. Please refer to the documentation of the corresponding filter ([section 9](#9)) for all details on the supported parameters. ``` ``` Multiple occurrences of the filter line can be used for the same proxy. The same filter can be referenced many times if needed. ``` Example: ``` listen bind *:80 filter trace name BEFORE-HTTP-COMP filter compression filter trace name AFTER-HTTP-COMP compression algo gzip compression offload server srv1 192.168.0.1:80 ``` **See also :** [section 9](#9). **fullconn** <conns> ``` Specify at what backend load the servers will reach their maxconn ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <conns> is the number of connections on the backend which will make the servers use the maximal number of connections. 
``` ``` When a server has a "maxconn" parameter specified, it means that its number of concurrent connections will never go higher. Additionally, if it has a "[minconn](#minconn)" parameter, it indicates a dynamic limit following the backend's load. The server will then always accept at least <minconn> connections, never more than <maxconn>, and the limit will be on the ramp between both values when the backend has less than <conns> concurrent connections. This makes it possible to limit the load on the servers during normal loads, but push it further for important loads without overloading the servers during exceptional loads. Since it's hard to get this value right, HAProxy automatically sets it to 10% of the sum of the maxconns of all frontends that may branch to this backend (based on "[use\_backend](#use_backend)" and "[default\_backend](#default_backend)" rules). That way it's safe to leave it unset. However, "[use\_backend](#use_backend)" involving dynamic names are not counted since there is no way to know if they could match or not. ``` Example : ``` # The servers will accept between 100 and 1000 concurrent connections each # and the maximum of 1000 will be reached when the backend reaches 10000 # connections. backend dynamic fullconn 10000 server srv1 dyn1:80 minconn 100 maxconn 1000 server srv2 dyn2:80 minconn 100 maxconn 1000 ``` **See also :** "maxconn", "server" **hash-balance-factor** <factor> ``` Specify the balancing factor for bounded-load consistent hashing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | no | yes | Arguments : ``` <factor> is the control for the maximum number of concurrent requests to send to a server, expressed as a percentage of the average number of concurrent requests across all of the active servers. ``` ``` Specifying a "[hash-balance-factor](#hash-balance-factor)" for a server with "hash-type consistent" enables an algorithm that prevents any one server from getting too many requests at once, even if some hash buckets receive many more requests than others. Setting <factor> to 0 (the default) disables the feature. Otherwise, <factor> is a percentage greater than 100. For example, if <factor> is 150, then no server will be allowed to have a load more than 1.5 times the average. If server weights are used, they will be respected. If the first-choice server is disqualified, the algorithm will choose another server based on the request hash, until a server with additional capacity is found. A higher <factor> allows more imbalance between the servers, while a lower <factor> means that more servers will be checked on average, affecting performance. Reasonable values are from 125 to 200. This setting is also used by "balance random" which internally relies on the consistent hashing mechanism. ``` **See also :** "[balance](#balance)" and "[hash-type](#hash-type)". **hash-type** <method> <function> <modifier> ``` Specify a method to use for mapping hashes to servers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <method> is the method used to select a server from the hash computed by the <function> : map-based the hash table is a static array containing all alive servers. The hashes will be very smooth, will consider weights, but will be static in that weight changes while a server is up will be ignored. This means that there will be no slow start. 
Also, since a server is selected by its position in the array, most mappings are changed when the server count changes. This means that when a server goes up or down, or when a server is added to a farm, most connections will be redistributed to different servers. This can be inconvenient with caches for instance. consistent the hash table is a tree filled with many occurrences of each server. The hash key is looked up in the tree and the closest server is chosen. This hash is dynamic, it supports changing weights while the servers are up, so it is compatible with the slow start feature. It has the advantage that when a server goes up or down, only its associations are moved. When a server is added to the farm, only a few part of the mappings are redistributed, making it an ideal method for caches. However, due to its principle, the distribution will never be very smooth and it may sometimes be necessary to adjust a server's weight or its ID to get a more balanced distribution. In order to get the same distribution on multiple load balancers, it is important that all servers have the exact same IDs. Note: consistent hash uses sdbm and avalanche if no hash function is specified. <function> is the hash function to be used : sdbm this function was created initially for sdbm (a public-domain reimplementation of ndbm) database library. It was found to do well in scrambling bits, causing better distribution of the keys and fewer splits. It also happens to be a good general hashing function with good distribution, unless the total server weight is a multiple of 64, in which case applying the avalanche modifier may help. djb2 this function was first proposed by Dan Bernstein many years ago on comp.lang.c. Studies have shown that for certain workload this function provides a better distribution than sdbm. It generally works well with text-based inputs though it can perform extremely poorly with numeric-only input or when the total server weight is a multiple of 33, unless the avalanche modifier is also used. wt6 this function was designed for HAProxy while testing other functions in the past. It is not as smooth as the other ones, but is much less sensible to the input data set or to the number of servers. It can make sense as an alternative to sdbm+avalanche or djb2+avalanche for consistent hashing or when hashing on numeric data such as a source IP address or a visitor identifier in a URL parameter. crc32 this is the most common CRC32 implementation as used in Ethernet, gzip, PNG, etc. It is slower than the other ones but may provide a better distribution or less predictable results especially when used on strings. <modifier> indicates an optional method applied after hashing the key : avalanche This directive indicates that the result from the hash function above should not be used in its raw form but that a 4-byte full avalanche hash must be applied first. The purpose of this step is to mix the resulting bits from the previous hash in order to avoid any undesired effect when the input contains some limited values or when the number of servers is a multiple of one of the hash's components (64 for SDBM, 33 for DJB2). Enabling avalanche tends to make the result less predictable, but it's also not as smooth as when using the original function. Some testing might be needed with some workloads. This hash is one of the many proposed by Bob Jenkins. ``` ``` The default hash type is "map-based" and is recommended for most usages. 
The default function is "[sdbm](#sdbm)", the selection of a function should be based on the range of the values being hashed. ``` **See also :** "[balance](#balance)", "[hash-balance-factor](#hash-balance-factor)", "server" **http-after-response** <action> <options...> [ { if | unless } <condition> ] ``` Access control for all Layer 7 responses (server, applet/service and internal ones). ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | yes | yes | yes | ``` The http-after-response statement defines a set of rules which apply to layer 7 processing. The rules are evaluated in their declaration order when they are met in a frontend, listen or backend section. Any rule may optionally be followed by an ACL-based condition, in which case it will only be evaluated if the condition is true. Since these rules apply on responses, the backend rules are applied first, followed by the frontend's rules. Unlike http-response rules, these ones are applied on all responses, the server ones but also to all responses generated by HAProxy. These rules are evaluated at the end of the responses analysis, before the data forwarding. The first keyword is the rule's action. Several types of actions are supported: - add-header <name> <fmt> - allow - capture <sample> id <id> - del-header <name> [ -m <meth> ] - replace-header <name> <regex-match> <replace-fmt> - replace-value <name> <regex-match> <replace-fmt> - set-header <name> <fmt> - set-status <status> [reason <str>] - set-var(<var-name>[,<cond> ...]) <expr> - set-var-fmt(<var-name>[,<cond> ...]) <fmt> - strict-mode { on | off } - unset-var(<var-name>) The supported actions are described below. There is no limit to the number of http-after-response statements per instance. This directive is only available from named defaults sections, not anonymous ones. Rules defined in the defaults section are evaluated before ones in the associated proxy section. To avoid ambiguities, in this case the same defaults section cannot be used by proxies with the frontend capability and by proxies with the backend capability. It means a listen section cannot use a defaults section defining such rules. Note: Errors emitted in early stage of the request parsing are handled by the multiplexer at a lower level, before any http analysis. Thus no http-after-response ruleset is evaluated on these errors. ``` Example: ``` http-after-response set-header Strict-Transport-Security "max-age=31536000" http-after-response set-header Cache-Control "no-store,no-cache,private" http-after-response set-header Pragma "no-cache" ``` **http-after-response add-header** <name> <fmt> [ { if | unless } <condition> ] ``` This appends an HTTP header field whose name is specified in <name> and whose value is defined by <fmt>. Please refer to "[http-request add-header](#http-request%20add-header)" for a complete description. ``` **http-after-response capture** <sample> id <id> [ { if | unless } <condition> ] ``` This captures sample expression <sample> from the response buffer, and converts it to a string. Please refer to "[http-response capture](#http-response%20capture)" for a complete description. ``` **http-after-response allow** [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and lets the response pass the check. No further "[http-after-response](#http-after-response)" rules are evaluated for the current section. 
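For illustration only, a minimal sketch (the condition is an arbitrary
anonymous ACL): with the rules below, a 204 response is let through untouched
while other responses still get the extra header.

    http-after-response allow if { status 204 }
    http-after-response set-header X-Frame-Options DENY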
``` **http-after-response del-header** <name> [ -m <meth> ] [ { if | unless } <condition> ] ``` This removes all HTTP header fields whose name is specified in <name>. Please refer to "[http-request del-header](#http-request%20del-header)" for a complete description. ``` **http-after-response replace-header** <name> <regex-match> <replace-fmt> [ { if | unless } <condition> ] ``` This works like "[http-response replace-header](#http-response%20replace-header)". ``` Example: ``` http-after-response replace-header Set-Cookie (C=[^;]*);(.*) \1;ip=%bi;\2 # applied to: Set-Cookie: C=1; expires=Tue, 14-Jun-2016 01:40:45 GMT # outputs: Set-Cookie: C=1;ip=192.168.1.20; expires=Tue, 14-Jun-2016 01:40:45 GMT # assuming the backend IP is 192.168.1.20. ``` **http-after-response replace-value** <name> <regex-match> <replace-fmt> [ { if | unless } <condition> ] ``` This works like "[http-response replace-value](#http-response%20replace-value)". ``` Example: ``` http-after-response replace-value Cache-control ^public$ private # applied to: Cache-Control: max-age=3600, public # outputs: Cache-Control: max-age=3600, private ``` **http-after-response set-header** <name> <fmt> [ { if | unless } <condition> ] ``` This does the same as "[http-after-response add-header](#http-after-response%20add-header)" except that the header name is first removed if it existed. This is useful when passing security information to the server, where the header must not be manipulated by external users. ``` **http-after-response set-status** <status> [reason <str>] [ { if | unless } <condition> ] ``` This replaces the response status code with <status> which must be an integer between 100 and 999. Please refer to "[http-response set-status](#http-response%20set-status)" for a complete description. http-after-response set-var(<var-name>[,<cond> ...]) <expr> [ { if | unless } <condition> ] http-after-response set-var-fmt(<var-name>[,<cond> ...]) <fmt> [ { if | unless } <condition> ] This is used to set the contents of a variable. The variable is declared inline. Please refer to "http-request set-var" and "http-request set-var-fmt" for a complete description. ``` **http-after-response strict-mode** { on | off } [ { if | unless } <condition> ] ``` This enables or disables the strict rewriting mode for following rules. Please refer to "[http-request strict-mode](#http-request%20strict-mode)" for a complete description. ``` **http-after-response unset-var**(<var-name>) [ { if | unless } <condition> ] ``` This is used to unset a variable. See "http-request set-var" for details about <var-name>. ``` **http-check comment** <string> ``` Defines a comment for the following the http-check rule, reported in logs if it fails. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <string> is the comment message to add in logs if the following http-check rule fails. ``` ``` It only works for connect, send and expect rules. It is useful to make user-friendly error reporting. ``` **See also :** "[option httpchk](#option%20httpchk)", "[http-check connect](#http-check%20connect)", "[http-check send](#http-check%20send)" and "[http-check expect](#http-check%20expect)". 
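Example (an illustrative sketch; the port, URI and comment texts are arbitrary) :
```
option httpchk
http-check comment "TLS connect to the application port"
http-check connect port 443 ssl
http-check send meth GET uri /health
http-check comment "health endpoint must reply 2xx/3xx"
http-check expect status 200-399
```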
**http-check connect** [default] [port <expr>] [addr <ip>] [send-proxy] [via-socks4] [ssl] [sni <sni>] [alpn <alpn>] [linger] [proto <name>] [comment <msg>]
```
Opens a new connection to perform an HTTP health check
```
May be used in sections :

| defaults | frontend | listen | backend |
| --- | --- | --- | --- |
| yes | no | yes | yes |

Arguments :
```
comment <msg>  defines a message to report if the rule evaluation fails.

default        Use default options of the server line to do the health
               checks. The server options are used only if not redefined.

port <expr>    if not set, check port or server port is used.
               It tells HAProxy where to open the connection to. <port>
               must be a valid TCP port, expressed as an integer from 1 to
               65535, or a sample-fetch expression.

addr <ip>      defines the IP address to do the health check.

send-proxy     sends a PROXY protocol string.

via-socks4     enables outgoing health checks using an upstream socks4 proxy.

ssl            opens a ciphered connection.

sni <sni>      specifies the SNI to use to do health checks over SSL.

alpn <alpn>    defines which protocols to advertise with ALPN. The protocol
               list consists of a comma-delimited list of protocol names,
               for instance: "h2,http/1.1". If it is not set, the server
               ALPN is used.

proto <name>   forces the multiplexer's protocol to use for this connection.
               It must be an HTTP mux protocol and it must be usable on the
               backend side. The list of available protocols is reported in
               haproxy -vv.

linger         cleanly closes the connection instead of using a single RST.
```
```
Just like tcp-check health checks, it is possible to configure the connection
used to perform the HTTP health check. This directive should also be used to
describe a scenario involving several request/response exchanges, possibly on
different ports or with different servers.

When no TCP port is configured on the server line and no server port directive
is used, the first step of the http-check sequence must be to specify the port
with an "[http-check connect](#http-check%20connect)" rule.

In an http-check ruleset, a 'connect' rule is required and the ruleset must
start with one, so that administrators state explicitly what they intend to
do. Even though a 'connect' must start the ruleset, it may still be preceded
by set-var, unset-var or comment rules.
```
Examples :
```
# check HTTP and HTTPS services on a server.
# first open port 80 thanks to server line port directive, then
# http-check opens port 443, ciphered, and runs a request on it:
option httpchk

http-check connect
http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
http-check expect status 200-399
http-check connect port 443 ssl sni haproxy.1wt.eu
http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
http-check expect status 200-399

server www 10.0.0.1 check port 80
```
**See also :** "[option httpchk](#option%20httpchk)", "[http-check send](#http-check%20send)", "[http-check expect](#http-check%20expect)"

**http-check disable-on-404**
```
Enable a maintenance mode upon HTTP/404 response to health-checks
```
May be used in sections :

| defaults | frontend | listen | backend |
| --- | --- | --- | --- |
| yes | no | yes | yes |

Arguments : none
```
When this option is set, a server which returns an HTTP code 404 will be
excluded from further load-balancing, but will still receive persistent
connections. This provides a very convenient method for Web administrators
to perform a graceful shutdown of their servers.
It is also important to note that a server which is detected as failed while it was in this mode will not generate an alert, just a notice. If the server responds 2xx or 3xx again, it will immediately be reinserted into the farm. The status on the stats page reports "NOLB" for a server in this mode. It is important to note that this option only works in conjunction with the "[httpchk](#option%20httpchk)" option. If this option is used with "[http-check expect](#http-check%20expect)", then it has precedence over it so that 404 responses will still be considered as soft-stop. Note also that a stopped server will stay stopped even if it replies 404s. This option is only evaluated for running servers. ``` **See also :** "[option httpchk](#option%20httpchk)" and "[http-check expect](#http-check%20expect)". **http-check expect** [min-recv <int>] [comment <msg>] [ok-status <st>] [error-status <st>] [tout-status <st>] [on-success <fmt>] [on-error <fmt>] [status-code <expr>] [!] <match> <pattern> ``` Make HTTP health checks consider response contents or specific status codes ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` comment <msg> defines a message to report if the rule evaluation fails. min-recv is optional and can define the minimum amount of data required to evaluate the current expect rule. If the number of received bytes is under this limit, the check will wait for more data. This option can be used to resolve some ambiguous matching rules or to avoid executing costly regex matches on content known to be still incomplete. If an exact string is used, the minimum between the string length and this parameter is used. This parameter is ignored if it is set to -1. If the expect rule does not match, the check will wait for more data. If set to 0, the evaluation result is always conclusive. ok-status <st> is optional and can be used to set the check status if the expect rule is successfully evaluated and if it is the last rule in the tcp-check ruleset. "L7OK", "L7OKC", "L6OK" and "L4OK" are supported : - L7OK : check passed on layer 7 - L7OKC : check conditionally passed on layer 7, set server to NOLB state. - L6OK : check passed on layer 6 - L4OK : check passed on layer 4 By default "L7OK" is used. error-status <st> is optional and can be used to set the check status if an error occurred during the expect rule evaluation. "L7OKC", "L7RSP", "L7STS", "L6RSP" and "L4CON" are supported : - L7OKC : check conditionally passed on layer 7, set server to NOLB state. - L7RSP : layer 7 invalid response - protocol error - L7STS : layer 7 response error, for example HTTP 5xx - L6RSP : layer 6 invalid response - protocol error - L4CON : layer 1-4 connection problem By default "L7RSP" is used. tout-status <st> is optional and can be used to set the check status if a timeout occurred during the expect rule evaluation. "L7TOUT", "L6TOUT", and "L4TOUT" are supported : - L7TOUT : layer 7 (HTTP/SMTP) timeout - L6TOUT : layer 6 (SSL) timeout - L4TOUT : layer 1-4 timeout By default "L7TOUT" is used. on-success <fmt> is optional and can be used to customize the informational message reported in logs if the expect rule is successfully evaluated and if it is the last rule in the tcp-check ruleset. <fmt> is a log-format string. on-error <fmt> is optional and can be used to customize the informational message reported in logs if an error occurred during the expect rule evaluation. <fmt> is a log-format string. 
<match> is a keyword indicating how to look for a specific pattern in the response. The keyword may be one of "[status](#status)", "rstatus", "[hdr](#hdr)", "fhdr", "string", or "rstring". The keyword may be preceded by an exclamation mark ("!") to negate the match. Spaces are allowed between the exclamation mark and the keyword. See below for more details on the supported keywords. <pattern> is the pattern to look for. It may be a string, a regular expression or a more complex pattern with several arguments. If the string pattern contains spaces, they must be escaped with the usual backslash ('\'). ``` ``` By default, "[option httpchk](#option%20httpchk)" considers that response statuses 2xx and 3xx are valid, and that others are invalid. When "[http-check expect](#http-check%20expect)" is used, it defines what is considered valid or invalid. Only one "[http-check](#http-check)" statement is supported in a backend. If a server fails to respond or times out, the check obviously fails. The available matches are : status <codes> : test the status codes found parsing <codes> string. it must be a comma-separated list of status codes or range codes. A health check response will be considered as valid if the response's status code matches any status code or is inside any range of the list. If the "[status](#status)" keyword is prefixed with "!", then the response will be considered invalid if the status code matches. rstatus <regex> : test a regular expression for the HTTP status code. A health check response will be considered valid if the response's status code matches the expression. If the "rstatus" keyword is prefixed with "!", then the response will be considered invalid if the status code matches. This is mostly used to check for multiple codes. hdr { name | name-lf } [ -m <meth> ] <name> [ { value | value-lf } [ -m <meth> ] <value> : test the specified header pattern on the HTTP response headers. The name pattern is mandatory but the value pattern is optional. If not specified, only the header presence is verified. <meth> is the matching method, applied on the header name or the header value. Supported matching methods are "[str](#str)" (exact match), "beg" (prefix match), "end" (suffix match), "[sub](#sub)" (substring match) or "reg" (regex match). If not specified, exact matching method is used. If the "name-lf" parameter is used, <name> is evaluated as a log-format string. If "value-lf" parameter is used, <value> is evaluated as a log-format string. These parameters cannot be used with the regex matching method. Finally, the header value is considered as comma-separated list. Note that matchings are case insensitive on the header names. fhdr { name | name-lf } [ -m <meth> ] <name> [ { value | value-lf } [ -m <meth> ] <value> : test the specified full header pattern on the HTTP response headers. It does exactly the same than "[hdr](#hdr)" keyword, except the full header value is tested, commas are not considered as delimiters. string <string> : test the exact string match in the HTTP response body. A health check response will be considered valid if the response's body contains this exact string. If the "string" keyword is prefixed with "!", then the response will be considered invalid if the body contains this string. This can be used to look for a mandatory word at the end of a dynamic page, or to detect a failure when a specific error appears on the check page (e.g. a stack trace). rstring <regex> : test a regular expression on the HTTP response body. 
A health check response will be considered valid if the response's body
matches this expression. If the "rstring" keyword is prefixed with "!", then
the response will be considered invalid if the body matches the expression.
This can be used to look for a mandatory word at the end of a dynamic page,
or to detect a failure when a specific error appears on the check page (e.g.
a stack trace).

string-lf <fmt> : test a log-format string match in the HTTP response body.
              A health check response will be considered valid if the
              response's body contains the string resulting from the
              evaluation of <fmt>, which follows the log-format rules. If
              prefixed with "!", then the response will be considered
              invalid if the body contains the string.

It is important to note that the responses will be limited to a certain size
defined by the global "[tune.bufsize](#tune.bufsize)" option, which defaults to 16384 bytes.
Thus, responses that are too large may not contain the mandatory pattern when
using "string" or "rstring". If a large response is absolutely required, it
is possible to change the default max size by setting the global variable.
However, it is worth keeping in mind that parsing very large responses can
waste some CPU cycles, especially when regular expressions are used, and that
it is always better to focus the checks on smaller resources.

In an http-check ruleset, the last expect rule may be implicit. If no expect
rule is specified after the last "[http-check send](#http-check%20send)", an implicit expect rule is
defined to match on 2xx or 3xx status codes. It means this rule is also
defined if there is no "[http-check](#http-check)" rule at all, when only "[option httpchk](#option%20httpchk)" is
set.

Last, if "[http-check expect](#http-check%20expect)" is combined with "[http-check disable-on-404](#http-check%20disable-on-404)", then
the latter has precedence when the server responds with 404.
```
Examples :
```
# only accept statuses 200, 201 and 300 to 310 as valid
http-check expect status 200,201,300-310

# be sure a sessid cookie is set
http-check expect header name "[set-cookie](#set-cookie)" value -m beg "sessid="

# consider SQL errors as errors
http-check expect ! string SQL\ Error

# consider status 5xx only as errors
http-check expect ! rstatus ^5

# check that we have a correct hexadecimal tag before /html
http-check expect rstring <!--tag:[0-9a-f]*--></html>
```
**See also :** "[option httpchk](#option%20httpchk)", "[http-check connect](#http-check%20connect)", "[http-check disable-on-404](#http-check%20disable-on-404)" and "[http-check send](#http-check%20send)".

**http-check send** [meth <method>] [{ uri <uri> | uri-lf <fmt> }] [ver <version>] [hdr <name> <fmt>]\* [{ body <string> | body-lf <fmt> }] [comment <msg>]
```
Add an optional list of headers and/or a body to the request sent during HTTP health checks.
```
May be used in sections :

| defaults | frontend | listen | backend |
| --- | --- | --- | --- |
| yes | no | yes | yes |

Arguments :
```
comment <msg>  defines a message to report if the rule evaluation fails.

meth <method>  is the optional HTTP method used with the requests. When not
               set, the "OPTIONS" method is used, as it generally requires
               low server processing and is easy to filter out from the
               logs. Any method may be used, though it is not recommended
               to invent non-standard ones.

uri <uri>      is optional and sets the URI referenced in the HTTP requests
               to the string <uri>. It defaults to "/" which is accessible
               by default on almost any server, but may be changed to any
               other URI.
Query strings are permitted. uri-lf <fmt> is optional and sets the URI referenced in the HTTP requests using the log-format string <fmt>. It defaults to "/" which is accessible by default on almost any server, but may be changed to any other URI. Query strings are permitted. ver <version> is the optional HTTP version string. It defaults to "HTTP/1.0" but some servers might behave incorrectly in HTTP 1.0, so turning it to HTTP/1.1 may sometimes help. Note that the Host field is mandatory in HTTP/1.1; use the "[hdr](#hdr)" argument to add it. hdr <name> <fmt> adds the HTTP header field whose name is specified in <name> and whose value is defined by <fmt>, which follows the log-format rules. body <string> adds the body defined by <string> to the request sent during HTTP health checks. If defined, the "Content-Length" header is automatically added to the request. body-lf <fmt> adds the body defined by the log-format string <fmt> to the request sent during HTTP health checks. If defined, the "Content-Length" header is automatically added to the request. ``` ``` In addition to the request line defined by the "[option httpchk](#option%20httpchk)" directive, this is the valid way to add headers and optionally a body to the request sent during HTTP health checks. If a body is defined, the associated "Content-Length" header is automatically added. Thus, neither this header nor the "Transfer-encoding" header should be present in the request provided by "[http-check send](#http-check%20send)"; if they are, they will be ignored. The old trick of adding headers after the version string on the "[option httpchk](#option%20httpchk)" line is now deprecated. "[http-check send](#http-check%20send)" also does not support HTTP keep-alive. Keep in mind that it will automatically append a "Connection: close" header, unless a Connection header has already been configured via a hdr entry. The Host header and the request authority, when both defined, are automatically synchronized: when the HTTP request is sent and a Host header is inserted into it, the request authority is updated accordingly. Thus, do not be surprised if the Host header value overwrites the configured request authority. Note also that, for now, no Host header is automatically added to HTTP/1.1 or above requests; you should add it explicitly. ``` **See also :** "[option httpchk](#option%20httpchk)", "[http-check send-state](#http-check%20send-state)" and "[http-check expect](#http-check%20expect)". **http-check send-state** ``` Enable emission of a state header with HTTP health checks ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` When this option is set, HAProxy will systematically send a special header "X-Haproxy-Server-State" with a list of parameters indicating to each server how it is seen by HAProxy. This can be used for instance when a server is manipulated without access to HAProxy and the operator needs to know whether HAProxy still sees it up or not, or if the server is the last one in a farm. The header is composed of fields delimited by semi-colons, the first of which is a word ("UP", "DOWN", "NOLB"), possibly followed by the number of valid checks over the total number required before a transition, just as it appears in the stats interface.
Next headers are in the form "<variable>=<value>", indicating in no specific order some values available in the stats interface : - a variable "address", containing the address of the backend server. This corresponds to the <address> field in the server declaration. For unix domain sockets, it will read "unix". - a variable "[port](#port)", containing the port of the backend server. This corresponds to the <port> field in the server declaration. For unix domain sockets, it will read "unix". - a variable "[name](#name)", containing the name of the backend followed by a slash ("/") then the name of the server. This can be used when a server is checked in multiple backends. - a variable "[node](#node)" containing the name of the HAProxy node, as set in the global "[node](#node)" variable, otherwise the system's hostname if unspecified. - a variable "[weight](#weight)" indicating the weight of the server, a slash ("/") and the total weight of the farm (just counting usable servers). This helps to know if other servers are available to handle the load when this one fails. - a variable "scur" indicating the current number of concurrent connections on the server, followed by a slash ("/") then the total number of connections on all servers of the same backend. - a variable "qcur" indicating the current number of requests in the server's queue. Example of a header received by the application server : >>> X-Haproxy-Server-State: UP 2/3; name=bck/srv2; node=lb1; weight=1/2; \ scur=13/22; qcur=0 ``` **See also :** "[option httpchk](#option%20httpchk)", "[http-check disable-on-404](#http-check%20disable-on-404)" and "[http-check send](#http-check%20send)". ``` http-check set-var(<var-name>[,<cond> ...]) <expr> http-check set-var-fmt(<var-name>[,<cond> ...]) <fmt> This operation sets the content of a variable. The variable is declared inline. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <var-name> The name of the variable starts with an indication about its scope. The scopes allowed for http-check are: "[proc](#proc)" : the variable is shared with the whole process. "sess" : the variable is shared with the tcp-check session. "[check](#check)": the variable is declared for the lifetime of the tcp-check. This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.', and '-'. <cond> A set of conditions that must all be true for the variable to actually be set (such as "ifnotempty", "ifgt" ...). See the set-var converter's description for a full list of possible conditions. <expr> Is a sample-fetch expression potentially followed by converters. <fmt> This is the value expressed using log-format rules (see Custom Log Format in [section 8.2.4](#8.2.4)). ``` Examples : ``` http-check set-var(check.port) int(1234) http-check set-var-fmt(check.port) "name=%H" ``` **http-check unset-var**(<var-name>) ``` Free a reference to a variable within its scope. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <var-name> The name of the variable starts with an indication about its scope. The scopes allowed for http-check are: "[proc](#proc)" : the variable is shared with the whole process. "sess" : the variable is shared with the tcp-check session. "[check](#check)": the variable is declared for the lifetime of the tcp-check. This prefix is followed by a name. 
The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.', and '-'. ``` Examples : ``` http-check unset-var(check.port) ``` **http-error status** <code> [content-type <type>] [ { default-errorfiles | errorfile <file> | errorfiles <name> | file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] [ hdr <name> <fmt> ]\* ``` Defines a custom error message to use instead of errors generated by HAProxy. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` status <code> is the HTTP status code. It must be specified. Currently, HAProxy is capable of generating codes 200, 400, 401, 403, 404, 405, 407, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504. content-type <type> is the response content type, for instance "text/plain". This parameter is ignored and should be omitted when an errorfile is configured or when the payload is empty. Otherwise, it must be defined. default-errorfiles Reset the previously defined error message for current proxy for the status <code>. If used on a backend, the frontend error message is used, if defined. If used on a frontend, the default error message is used. errorfile <file> designates a file containing the full HTTP response. It is recommended to follow the common practice of appending ".http" to the filename so that people do not confuse the response with HTML error pages, and to use absolute paths, since files are read before any chroot is performed. errorfiles <name> designates the http-errors section to use to import the error message with the status code <code>. If no such message is found, the proxy's error messages are considered. file <file> specifies the file to use as response payload. If the file is not empty, its content-type must be set as argument to "content-type", otherwise, any "content-type" argument is ignored. <file> is considered as a raw string. string <str> specifies the raw string to use as response payload. The content-type must always be set as argument to "content-type". lf-file <file> specifies the file to use as response payload. If the file is not empty, its content-type must be set as argument to "content-type", otherwise, any "content-type" argument is ignored. <file> is evaluated as a log-format string. lf-string <str> specifies the log-format string to use as response payload. The content-type must always be set as argument to "content-type". hdr <name> <fmt> adds to the response the HTTP header field whose name is specified in <name> and whose value is defined by <fmt>, which follows to the log-format rules. This parameter is ignored if an errorfile is used. ``` ``` This directive may be used instead of "errorfile", to define a custom error message. As "errorfile" directive, it is used for errors detected and returned by HAProxy. If an errorfile is defined, it is parsed when HAProxy starts and must be valid according to the HTTP standards. The generated response must not exceed the configured buffer size (BUFFSIZE), otherwise an internal error will be returned. Finally, if you consider to use some http-after-response rules to rewrite these errors, the reserved buffer space should be available (see "[tune.maxrewrite](#tune.maxrewrite)"). The files are read at the same time as the configuration and kept in memory. For this reason, the errors continue to be returned even when the process is chrooted, and no file change is considered while the process is running. 
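# A minimal illustrative sketch; the error wording and the errorfile path
# below are assumptions, to be adapted to the local setup.
http-error status 503 content-type "text/plain" string "Service temporarily unavailable"
http-error status 404 errorfile /etc/haproxy/errorfiles/404-custom.http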
Note: 400/408/500 errors emitted in early stage of the request parsing are handled by the multiplexer at a lower level. No custom formatting is supported at this level. Thus only static error messages, defined with "errorfile" directive, are supported. However, this limitation only exists during the request headers parsing or between two transactions. ``` **See also :** "errorfile", "[errorfiles](#errorfiles)", "[errorloc](#errorloc)", "[errorloc302](#errorloc302)", "[errorloc303](#errorloc303)" and [section 3.8](#3.8) about http-errors. **http-request** <action> [options...] [ { if | unless } <condition> ] ``` Access control for Layer 7 requests ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | yes | yes | yes | ``` The http-request statement defines a set of rules which apply to layer 7 processing. The rules are evaluated in their declaration order when they are met in a frontend, listen or backend section. Any rule may optionally be followed by an ACL-based condition, in which case it will only be evaluated if the condition is true. The first keyword is the rule's action. Several types of actions are supported: - add-acl(<file-name>) <key fmt> - add-header <name> <fmt> - allow - auth [realm <realm>] - cache-use <name> - capture <sample> [ len <length> | id <id> ] - del-acl(<file-name>) <key fmt> - del-header <name> [ -m <meth> ] - del-map(<file-name>) <key fmt> - deny [ { status | deny_status } <code>] ... - disable-l7-retry - do-resolve(<var>,<resolvers>,[ipv4,ipv6]) <expr> - early-hint <name> <fmt> - normalize-uri <normalizer> - redirect <rule> - reject - replace-header <name> <match-regex> <replace-fmt> - replace-path <match-regex> <replace-fmt> - replace-pathq <match-regex> <replace-fmt> - replace-uri <match-regex> <replace-fmt> - replace-value <name> <match-regex> <replace-fmt> - return [status <code>] [content-type <type>] ... - sc-inc-gpc(<idx>,<sc-id>) - sc-inc-gpc0(<sc-id>) - sc-inc-gpc1(<sc-id>) - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } - sc-set-gpt0(<sc-id>) { <int> | <expr> } - set-bandwidth-limit <name> [limit <expr>] [period <expr>] - set-dst <expr> - set-dst-port <expr> - set-header <name> <fmt> - set-log-level <level> - set-map(<file-name>) <key fmt> <value fmt> - set-mark <mark> - set-method <fmt> - set-nice <nice> - set-path <fmt> - set-pathq <fmt> - set-priority-class <expr> - set-priority-offset <expr> - set-query <fmt> - set-src <expr> - set-src-port <expr> - set-timeout { server | tunnel } { <timeout> | <expr> } - set-tos <tos> - set-uri <fmt> - set-var(<var-name>[,<cond> ...]) <expr> - set-var-fmt(<var-name>[,<cond> ...]) <fmt> - send-spoe-group <engine-name> <group-name> - silent-drop [ rst-ttl <ttl> ] - strict-mode { on | off } - tarpit [ { status | deny_status } <code>] ... - track-sc0 <key> [table <table>] - track-sc1 <key> [table <table>] - track-sc2 <key> [table <table>] - unset-var(<var-name>) - use-service <service-name> - wait-for-body time <time> [ at-least <bytes> ] - wait-for-handshake - cache-use <name> The supported actions are described below. There is no limit to the number of http-request statements per instance. This directive is only available from named defaults sections, not anonymous ones. Rules defined in the defaults section are evaluated before ones in the associated proxy section. To avoid ambiguities, in this case the same defaults section cannot be used by proxies with the frontend capability and by proxies with the backend capability. 
It means a listen section cannot use a defaults section defining such rules. ``` Example: ``` acl nagios src 192.168.129.3 acl local_net src 192.168.0.0/16 acl auth_ok http_auth(L1) http-request allow if nagios http-request allow if local_net auth_ok http-request auth realm Gimme if local_net auth_ok http-request deny ``` Example: ``` acl key req.hdr(X-Add-Acl-Key) -m found acl add path /addacl acl del path /delacl acl myhost hdr(Host) -f myhost.lst http-request add-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key add http-request del-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key del ``` Example: ``` acl value req.hdr(X-Value) -m found acl setmap path /setmap acl delmap path /delmap use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found } http-request set-map(map.lst) %[src] %[req.hdr(X-Value)] if setmap value http-request del-map(map.lst) %[src] if delmap ``` **See also :** "[stats http-request](#stats%20http-request)", [section 3.4](#3.4) about userlists and [section 7](#7) about ACL usage. **http-request add-acl**(<file-name>) <key fmt> [ { if | unless } <condition> ] ``` This is used to add a new entry into an ACL. The ACL must be loaded from a file (even a dummy empty file). The file name of the ACL to be updated is passed between parentheses. It takes one argument: <key fmt>, which follows log-format rules, to collect content of the new entry. It performs a lookup in the ACL before insertion, to avoid duplicated (or more) values. This lookup is done by a linear search and can be expensive with large lists! It is the equivalent of the "add acl" command from the stats socket, but can be triggered by an HTTP request. ``` **http-request add-header** <name> <fmt> [ { if | unless } <condition> ] ``` This appends an HTTP header field whose name is specified in <name> and whose value is defined by <fmt> which follows the log-format rules (see Custom Log Format in [section 8.2.4](#8.2.4)). This is particularly useful to pass connection-specific information to the server (e.g. the client's SSL certificate), or to combine several headers into one. This rule is not final, so it is possible to add other similar rules. Note that header addition is performed immediately, so one rule might reuse the resulting header from a previous rule. ``` **http-request allow** [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and lets the request pass the check. No further "http-request" rules are evaluated for the current section. ``` **http-request auth** [realm <realm>] [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and immediately responds with an HTTP 401 or 407 error code to invite the user to present a valid user name and password. No further "http-request" rules are evaluated. An optional "realm" parameter is supported, it sets the authentication realm that is returned with the response (typically the application's name). The corresponding proxy's error message is used. It may be customized using an "errorfile" or an "[http-error](#http-error)" directive. For 401 responses, all occurrences of the WWW-Authenticate header are removed and replaced by a new one with a basic authentication challenge for realm "<realm>". For 407 responses, the same is done on the Proxy-Authenticate header. If the error message must not be altered, consider to use "[http-request return](#http-request%20return)" rule instead. 
``` Example: ``` acl auth_ok http_auth_group(L1) G1 http-request auth unless auth_ok ``` **http-request cache-use** <name> [ { if | unless } <condition> ] ``` See [section 6.2](#6.2) about cache setup. ``` **http-request capture** <sample> [ len <length> | id <id> ] [ { if | unless } <condition> ] ``` This captures sample expression <sample> from the request buffer, and converts it to a string of at most <len> characters. The resulting string is stored into the next request "[capture](#capture)" slot, so it will possibly appear next to some captured HTTP headers. It will then automatically appear in the logs, and it will be possible to extract it using sample fetch rules to feed it into headers or anything. The length should be limited given that this size will be allocated for each capture during the whole session life. Please check [section 7.3](#7.3) (Fetching samples) and "[capture request header](#capture%20request%20header)" for more information. If the keyword "id" is used instead of "len", the action tries to store the captured string in a previously declared capture slot. This is useful to run captures in backends. The slot id can be declared by a previous directive "[http-request capture](#http-request%20capture)" or with the "[declare capture](#declare%20capture)" keyword. When using this action in a backend, double-check that the relevant frontend(s) have the required capture slots, otherwise this rule will be ignored at run time. This cannot be detected at configuration parsing time due to HAProxy's ability to dynamically resolve the backend name at runtime. ``` **http-request del-acl**(<file-name>) <key fmt> [ { if | unless } <condition> ] ``` This is used to delete an entry from an ACL. The ACL must be loaded from a file (even a dummy empty file). The file name of the ACL to be updated is passed between parentheses. It takes one argument: <key fmt>, which follows log-format rules, to collect content of the entry to delete. It is the equivalent of the "del acl" command from the stats socket, but can be triggered by an HTTP request. ``` **http-request del-header** <name> [ -m <meth> ] [ { if | unless } <condition> ] ``` This removes all HTTP header fields whose name is specified in <name>. <meth> is the matching method, applied on the header name. Supported matching methods are "[str](#str)" (exact match), "beg" (prefix match), "end" (suffix match), "[sub](#sub)" (substring match) and "reg" (regex match). If not specified, the exact matching method is used. ``` **http-request del-map**(<file-name>) <key fmt> [ { if | unless } <condition> ] ``` This is used to delete an entry from a MAP. The MAP must be loaded from a file (even a dummy empty file). The file name of the MAP to be updated is passed between parentheses. It takes one argument: <key fmt>, which follows log-format rules, to collect content of the entry to delete. It is the equivalent of the "del map" command from the stats socket, but can be triggered by an HTTP request. ``` **http-request deny** [deny\_status <status>] [ { if | unless } <condition> ] **http-request deny** [ { status | deny\_status } <code>] [content-type <type>] [ { default-errorfiles | errorfile <file> | errorfiles <name> | file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] [ hdr <name> <fmt> ]\* [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and immediately rejects the request. By default, an HTTP 403 error is returned.
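# A minimal illustrative sketch; the path, network and file name used in the
# conditions are assumptions.
http-request deny if { path_beg /internal } !{ src 10.0.0.0/8 }
http-request deny deny_status 429 if { src -f /etc/haproxy/abusers.lst }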
But the response may be customized using the same syntax as "[http-request return](#http-request%20return)" rules. Thus, see "http-request return" for details. For compatibility purposes, when no argument is defined, or only "deny_status", the argument "default-errorfiles" is implied. It means "http-request deny [deny_status <status>]" is an alias of "http-request deny [status <status>] default-errorfiles". No further "http-request" rules are evaluated. See also "[http-request return](#http-request%20return)". ``` **http-request disable-l7-retry** [ { if | unless } <condition> ] ``` This disables any attempt to retry the request if it fails for any reason other than a connection failure. This can be useful for example to make sure POST requests aren't retried on failure. ``` **http-request do-resolve**(<var>,<resolvers>,[ipv4,ipv6]) <expr> [ { if | unless } <condition> ] ``` This action performs a DNS resolution of the output of <expr> and stores the result in the variable <var>. It uses the DNS resolvers section pointed to by <resolvers>. It is possible to choose a resolution preference using the optional arguments 'ipv4' or 'ipv6'. While the DNS resolution is performed, the client-side connection is paused until the resolution completes. If an IP address can be found, it is stored into <var>. If any kind of error occurs, then <var> is not set. One can use this action to discover a server IP address at run time, based on information found in the request (e.g. a Host header). If this action is used to find the server's IP address (using the "set-dst" action), then the server IP address in the backend must be set to 0.0.0.0. The do-resolve action takes a host-only parameter; any port must be removed from the string. ``` Example: ``` resolvers mydns nameserver local 127.0.0.53:53 nameserver google 8.8.8.8:53 timeout retry 1s hold valid 10s hold nx 3s hold other 3s hold obsolete 0s accepted_payload_size 8192 frontend fe bind 10.42.0.1:80 http-request do-resolve(txn.myip,mydns,ipv4) hdr(Host),host_only http-request capture var(txn.myip) len 40 # return 503 when the variable is not set, # which means a DNS resolution error use_backend b_503 unless { var(txn.myip) -m found } default_backend be backend b_503 # dummy backend used to return 503. # one can use the errorfile directive to send a nice # 503 error page to end users backend be # rule to prevent HAProxy from reconnecting to services # on the local network (forged DNS name used to scan the network) http-request deny if { var(txn.myip) -m ip 127.0.0.0/8 10.0.0.0/8 } http-request set-dst var(txn.myip) server clear 0.0.0.0:0 ``` ``` NOTE: Don't forget to set the "protection" rules to ensure HAProxy won't be used to scan the network or, worse, won't loop over itself... ``` **http-request early-hint** <name> <fmt> [ { if | unless } <condition> ] ``` This is used to build an HTTP 103 Early Hints response prior to any other one. This appends an HTTP header field to this response whose name is specified in <name> and whose value is defined by <fmt> which follows the log-format rules (see Custom Log Format in [section 8.2.4](#8.2.4)). This is particularly useful to pass to the client some Link headers to preload resources required to render the HTML documents. See RFC 8297 for more information.
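``` Example (an illustrative sketch; the preloaded resource path is an assumption): ``` http-request early-hint Link "</css/main.css>; rel=preload; as=style"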
``` **http-request normalize-uri** <normalizer> [ { if | unless } <condition> ] **http-request normalize-uri fragment-encode** [ { if | unless } <condition> ] **http-request normalize-uri fragment-strip** [ { if | unless } <condition> ] **http-request normalize-uri path-merge-slashes** [ { if | unless } <condition> ] **http-request normalize-uri path-strip-dot** [ { if | unless } <condition> ] **http-request normalize-uri path-strip-dotdot** [ full ] [ { if | unless } <condition> ] **http-request normalize-uri percent-decode-unreserved** [ strict ] [ { if | unless } <condition> ] **http-request normalize-uri percent-to-uppercase** [ strict ] [ { if | unless } <condition> ] **http-request normalize-uri query-sort-by-name** [ { if | unless } <condition> ] ``` Performs normalization of the request's URI. URI normalization in HAProxy 2.4 is currently available as an experimental technical preview. As such, it requires the global directive 'expose-experimental-directives' first to be able to invoke it. You should be prepared that the behavior of normalizers might change to fix possible issues, possibly breaking proper request processing in your infrastructure. Each normalizer handles a single type of normalization to allow for a fine-grained selection of the level of normalization that is appropriate for the supported backend. As an example the "path-strip-dotdot" normalizer might be useful for a static fileserver that directly maps the requested URI to the path within the local filesystem. However it might break routing of an API that expects a specific number of segments in the path. It is important to note that some normalizers might result in unsafe transformations for broken URIs. It might also be possible that a combination of normalizers that are safe by themselves results in unsafe transformations when improperly combined. As an example the "percent-decode-unreserved" normalizer might result in unexpected results when a broken URI includes bare percent characters. One such a broken URI is "/%%36%36" which would be decoded to "/%66" which in turn is equivalent to "/f". By specifying the "strict" option requests to such a broken URI would safely be rejected. The following normalizers are available: - fragment-encode: Encodes "#" as "%23". The "fragment-strip" normalizer should be preferred, unless it is known that broken clients do not correctly encode '#' within the path component. ``` Example: ``` - /#foo -> /%23foo ``` ``` - fragment-strip: Removes the URI's "fragment" component. According to RFC 3986#3.5 the "fragment" component of an URI should not be sent, but handled by the User Agent after retrieving a resource. This normalizer should be applied first to ensure that the fragment is not interpreted as part of the request's path component. ``` Example: ``` - /#foo -> / ``` ``` - path-strip-dot: Removes "/./" segments within the "[path](#path)" component (RFC 3986#6.2.2.3). Segments including percent encoded dots ("%2E") will not be detected. Use the "percent-decode-unreserved" normalizer first if this is undesired. ``` Example: ``` - /. -> / - /./bar/ -> /bar/ - /a/./a -> /a/a - /.well-known/ -> /.well-known/ (no change) ``` ``` - path-strip-dotdot: Normalizes "/../" segments within the "[path](#path)" component (RFC 3986#6.2.2.3). This merges segments that attempt to access the parent directory with their preceding segment. Empty segments do not receive special treatment. Use the "merge-slashes" normalizer first if this is undesired. 
Segments including percent encoded dots ("%2E") will not be detected. Use the "percent-decode-unreserved" normalizer first if this is undesired. ``` Example: ``` - /foo/../ -> / - /foo/../bar/ -> /bar/ - /foo/bar/../ -> /foo/ - /../bar/ -> /../bar/ - /bar/../../ -> /../ - /foo//../ -> /foo/ - /foo/%2E%2E/ -> /foo/%2E%2E/ ``` ``` If the "full" option is specified then "../" at the beginning will be removed as well: ``` Example: ``` - /../bar/ -> /bar/ - /bar/../../ -> / ``` ``` - path-merge-slashes: Merges adjacent slashes within the "[path](#path)" component into a single slash. ``` Example: ``` - // -> / - /foo//bar -> /foo/bar ``` ``` - percent-decode-unreserved: Decodes unreserved percent encoded characters to their representation as a regular character (RFC 3986#6.2.2.2). The set of unreserved characters includes all letters, all digits, "-", ".", "_", and "~". ``` Example: ``` - /%61dmin -> /admin - /foo%3Fbar=baz -> /foo%3Fbar=baz (no change) - /%%36%36 -> /%66 (unsafe) - /%ZZ -> /%ZZ ``` ``` If the "strict" option is specified then invalid sequences will result in a HTTP 400 Bad Request being returned. ``` Example: ``` - /%%36%36 -> HTTP 400 - /%ZZ -> HTTP 400 ``` ``` - percent-to-uppercase: Uppercases letters within percent-encoded sequences (RFC 3986#6.2.2.1). ``` Example: ``` - /%6f -> /%6F - /%zz -> /%zz ``` ``` If the "strict" option is specified then invalid sequences will result in a HTTP 400 Bad Request being returned. ``` Example: ``` - /%zz -> HTTP 400 ``` ``` - query-sort-by-name: Sorts the query string parameters by parameter name. Parameters are assumed to be delimited by '&'. Shorter names sort before longer names and identical parameter names maintain their relative order. ``` Example: ``` - /?c=3&a=1&b=2 -> /?a=1&b=2&c=3 - /?aaa=3&a=1&aa=2 -> /?a=1&aa=2&aaa=3 - /?a=3&b=4&a=1&b=5&a=2 -> /?a=3&a=1&a=2&b=4&b=5 ``` **http-request redirect** <rule> [ { if | unless } <condition> ] ``` This performs an HTTP redirection based on a redirect rule. This is exactly the same as the "[redirect](#redirect)" statement except that it inserts a redirect rule which can be processed in the middle of other "http-request" rules and that these rules use the "[log-format](#log-format)" strings. See the "[redirect](#redirect)" keyword for the rule's syntax. ``` **http-request reject** [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and immediately closes the connection without sending any response. It acts similarly to the "[tcp-request content reject](#tcp-request%20content%20reject)" rules. It can be useful to force an immediate connection closure on HTTP/2 connections. ``` **http-request replace-header** <name> <match-regex> <replace-fmt> [ { if | unless } <condition> ] ``` This matches the value of all occurrences of header field <name> against <match-regex>. Matching is performed case-sensitively. Matching values are completely replaced by <replace-fmt>. Format characters are allowed in <replace-fmt> and work like <fmt> arguments in "[http-request add-header](#http-request%20add-header)". Standard back-references using the backslash ('\') followed by a number are supported. This action acts on whole header lines, regardless of the number of values they may contain. Thus it is well-suited to process headers naturally containing commas in their value, such as If-Modified-Since. Headers that contain a comma-separated list of values, such as Accept, should be processed using "[http-request replace-value](#http-request%20replace-value)". 
``` Example: ``` http-request replace-header Cookie foo=([^;]*);(.*) foo=\1;ip=%bi;\2 # applied to: Cookie: foo=foobar; expires=Tue, 14-Jun-2016 01:40:45 GMT; # outputs: Cookie: foo=foobar;ip=192.168.1.20; expires=Tue, 14-Jun-2016 01:40:45 GMT; # assuming the backend IP is 192.168.1.20 http-request replace-header User-Agent curl foo # applied to: User-Agent: curl/7.47.0 # outputs: User-Agent: foo ``` **http-request replace-path** <match-regex> <replace-fmt> [ { if | unless } <condition> ] ``` This works like "replace-header" except that it works on the request's path component instead of a header. The path component starts at the first '/' after an optional scheme+authority and ends before the question mark. Thus, the replacement does not modify the scheme, the authority and the query-string. It is worth noting that regular expressions may be more expensive to evaluate than certain ACLs, so rare replacements may benefit from a condition to avoid performing the evaluation at all if it does not match. ``` Example: ``` # prefix /foo : turn /bar?q=1 into /foo/bar?q=1 : http-request replace-path (.*) /foo\1 # strip /foo : turn /foo/bar?q=1 into /bar?q=1 http-request replace-path /foo/(.*) /\1 # or more efficient if only some requests match : http-request replace-path /foo/(.*) /\1 if { url_beg /foo/ } ``` **http-request replace-pathq** <match-regex> <replace-fmt> [ { if | unless } <condition> ] ``` This does the same as "[http-request replace-path](#http-request%20replace-path)" except that the path contains the query-string if any is present. Thus, the path and the query-string are replaced. ``` Example: ``` # suffix /foo : turn /bar?q=1 into /bar/foo?q=1 : http-request replace-pathq ([^?]*)(\?(.*))? \1/foo\2 ``` **http-request replace-uri** <match-regex> <replace-fmt> [ { if | unless } <condition> ] ``` This works like "replace-header" except that it works on the request's URI part instead of a header. The URI part may contain an optional scheme, authority or query string. These are considered to be part of the value that is matched against. It is worth noting that regular expressions may be more expensive to evaluate than certain ACLs, so rare replacements may benefit from a condition to avoid performing the evaluation at all if it does not match. IMPORTANT NOTE: historically in HTTP/1.x, the vast majority of requests sent by browsers use the "origin form", which differs from the "absolute form" in that they do not contain a scheme nor authority in the URI portion. Mostly only requests sent to proxies, those forged by hand and some emitted by certain applications use the absolute form. As such, "replace-uri" usually works fine most of the time in HTTP/1.x with rules starting with a "/". But with HTTP/2, clients are encouraged to send absolute URIs only, which look like the ones HTTP/1 clients use to talk to proxies. Such partial replace-uri rules may then fail in HTTP/2 when they work in HTTP/1. Either the rules need to be adapted to optionally match a scheme and authority, or replace-path should be used. ``` Example: ``` # rewrite all "http" absolute requests to "https": http-request replace-uri ^http://(.*) https://\1 # prefix /foo : turn /bar?q=1 into /foo/bar?q=1 : http-request replace-uri ([^/:]*://[^/]*)?(.*) \1/foo\2 ``` **http-request replace-value** <name> <match-regex> <replace-fmt> [ { if | unless } <condition> ] ``` This works like "replace-header" except that it matches the regex against every comma-delimited value of the header field <name> instead of the entire header. 
This is suited for all headers which are allowed to carry more than one value. An example could be the Accept header. ``` Example: ``` http-request replace-value X-Forwarded-For ^192\.168\.(.*)$ 172.16.\1 # applied to: X-Forwarded-For: 192.168.10.1, 192.168.13.24, 10.0.0.37 # outputs: X-Forwarded-For: 172.16.10.1, 172.16.13.24, 10.0.0.37 ``` **http-request return** [status <code>] [content-type <type>] [ { default-errorfiles | errorfile <file> | errorfiles <name> | file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] [ hdr <name> <fmt> ]\* [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and immediately returns a response. The default status code used for the response is 200. It can be optionally specified as an argument to "[status](#status)". The response content-type may also be specified as an argument to "content-type". Finally, the response itself may be defined. It can be a full HTTP response specifying the errorfile to use, or the response payload specifying the file or the string to use. These rules are followed to create the response : * If neither the errorfile nor the payload to use is defined, a dummy response is returned. Only the "[status](#status)" argument is considered. It can be any code in the range [200, 599]. The "content-type" argument, if any, is ignored. * If the "default-errorfiles" argument is set, the proxy's errorfiles are considered. If the "[status](#status)" argument is defined, it must be one of the status codes handled by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. * If a specific errorfile is defined, with an "errorfile" argument, the corresponding file, containing a full HTTP response, is returned. Only the "[status](#status)" argument is considered. It must be one of the status codes handled by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. * If an http-errors section is defined, with an "[errorfiles](#errorfiles)" argument, the corresponding file in the specified http-errors section, containing a full HTTP response, is returned. Only the "[status](#status)" argument is considered. It must be one of the status codes handled by HAProxy (200, 400, 403, 404, 405, 408, 410, 413, 425, 429, 500, 501, 502, 503, and 504). The "content-type" argument, if any, is ignored. * If a "file" or a "lf-file" argument is specified, the file's content is used as the response payload. If the file is not empty, its content-type must be set as an argument to "content-type". Otherwise, any "content-type" argument is ignored. With a "lf-file" argument, the file's content is evaluated as a log-format string. With a "file" argument, it is considered as raw content. * If a "string" or "lf-string" argument is specified, the defined string is used as the response payload. The content-type must always be set as an argument to "content-type". With a "lf-string" argument, the string is evaluated as a log-format string. With a "string" argument, it is considered as a raw string. When the response is not based on an errorfile, it is possible to append HTTP header fields to the response using "[hdr](#hdr)" arguments. Otherwise, all "[hdr](#hdr)" arguments are ignored. For each one, the header name is specified in <name> and its value is defined by <fmt> which follows the log-format rules. Note that the generated response must be smaller than a buffer.
In addition, to avoid any warning, when an errorfile or a raw file is loaded, the buffer space reserved for header rewriting should also be left free. No further "http-request" rules are evaluated. ``` Example: ``` http-request return errorfile /etc/haproxy/errorfiles/200.http \ if { path /ping } http-request return content-type image/x-icon file /var/www/favicon.ico \ if { path /favicon.ico } http-request return status 403 content-type text/plain \ lf-string "Access denied. IP %[src] is blacklisted." \ if { src -f /etc/haproxy/blacklist.lst } ``` **http-request sc-inc-gpc**(<idx>,<sc-id>) [ { if | unless } <condition> ] ``` This action increments the General Purpose Counter at the index <idx> of the array associated with the sticky counter designated by <sc-id>. If an error occurs, this action silently fails and the actions evaluation continues. <idx> is an integer between 0 and 99 and <sc-id> is an integer between 0 and 2. It also silently fails if there is no GPC stored at this index. This action applies only to the 'gpc' and 'gpc_rate' array data_types (and not to the legacy 'gpc0', 'gpc1', 'gpc0_rate' nor 'gpc1_rate' data_types). ``` **http-request sc-inc-gpc0**(<sc-id>) [ { if | unless } <condition> ] **http-request sc-inc-gpc1**(<sc-id>) [ { if | unless } <condition> ] ``` This action increments the GPC0 or GPC1 counter according to the sticky counter designated by <sc-id>. If an error occurs, this action silently fails and the actions evaluation continues. ``` **http-request sc-set-gpt**(<idx>,<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] ``` This action sets the 32-bit unsigned GPT at the index <idx> of the array associated with the sticky counter designated by <sc-id> to the value of <int>/<expr>. The expected result is a boolean. If an error occurs, this action silently fails and the actions evaluation continues. <idx> is an integer between 0 and 99 and <sc-id> is an integer between 0 and 2. It also silently fails if there is no GPT stored at this index. This action applies only to the 'gpt' array data_type (and not to the legacy 'gpt0' data-type). ``` **http-request sc-set-gpt0**(<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] ``` This action sets the 32-bit unsigned GPT0 tag according to the sticky counter designated by <sc-id> and the value of <int>/<expr>. The expected result is a boolean. If an error occurs, this action silently fails and the actions evaluation continues. ``` **http-request send-spoe-group** <engine-name> <group-name> [ { if | unless } <condition> ] ``` This action is used to trigger sending of a group of SPOE messages. To do so, the SPOE engine used to send messages must be defined, as well as the SPOE group to send. Of course, the SPOE engine must refer to an existing SPOE filter. If no engine name is provided on the SPOE filter line, the SPOE agent name must be used. ``` Arguments: ``` <engine-name> The SPOE engine name. <group-name> The SPOE group name as specified in the engine configuration. ``` **http-request set-bandwidth-limit** <name> [limit <expr>] [period <expr>] [ { if | unless } <condition> ] ``` This action is used to enable the bandwidth limitation filter <name>, either in the upload or the download direction depending on the filter type. Custom limit and period may be defined, if and only if <name> references a per-stream bandwidth limitation filter. When a set-bandwidth-limit rule is executed, it first resets all settings of the filter to their defaults prior to enabling it.
As a consequence, if several "set-bandwidth-limit" actions are executed for the same filter, only the last one is considered. Several bandwidth limitation filters can be enabled on the same stream. Note that this action cannot be used in a defaults section because bandwidth limitation filters cannot be defined in defaults sections. In addition, only the HTTP payload transfer is limited. The HTTP headers are not considered. ``` Arguments: ``` <expr> Is a standard HAProxy expression formed by a sample-fetch followed by some converters. The result is converted to an integer. It is interpreted as a size in bytes for the "limit" parameter and as a duration in milliseconds for the "period" parameter. ``` Example: ``` http-request set-bandwidth-limit global-limit http-request set-bandwidth-limit my-limit limit 1m period 10s ``` ``` See [section 9.7](#9.7) about bandwidth limitation filter setup. ``` **http-request set-dst** <expr> [ { if | unless } <condition> ] ``` This is used to set the destination IP address to the value of specified expression. Useful when a proxy in front of HAProxy rewrites destination IP, but provides the correct IP in a HTTP header; or you want to mask the IP for privacy. If you want to connect to the new address/port, use '0.0.0.0:0' as a server address in the backend. ``` Arguments: ``` <expr> Is a standard HAProxy expression formed by a sample-fetch followed by some converters. ``` Example: ``` http-request set-dst hdr(x-dst) http-request set-dst dst,ipmask(24) ``` ``` When possible, set-dst preserves the original destination port as long as the address family allows it, otherwise the destination port is set to 0. ``` **http-request set-dst-port** <expr> [ { if | unless } <condition> ] ``` This is used to set the destination port address to the value of specified expression. If you want to connect to the new address/port, use '0.0.0.0:0' as a server address in the backend. ``` Arguments: ``` <expr> Is a standard HAProxy expression formed by a sample-fetch followed by some converters. ``` Example: ``` http-request set-dst-port hdr(x-port) http-request set-dst-port int(4000) ``` ``` When possible, set-dst-port preserves the original destination address as long as the address family supports a port, otherwise it forces the destination address to IPv4 "0.0.0.0" before rewriting the port. ``` **http-request set-header** <name> <fmt> [ { if | unless } <condition> ] ``` This does the same as "[http-request add-header](#http-request%20add-header)" except that the header name is first removed if it existed. This is useful when passing security information to the server, where the header must not be manipulated by external users. Note that the new value is computed before the removal so it is possible to concatenate a value to an existing header. ``` Example: ``` http-request set-header X-Haproxy-Current-Date %T http-request set-header X-SSL %[ssl_fc] http-request set-header X-SSL-Session_ID %[ssl_fc_session_id,hex] http-request set-header X-SSL-Client-Verify %[ssl_c_verify] http-request set-header X-SSL-Client-DN %{+Q}[ssl_c_s_dn] http-request set-header X-SSL-Client-CN %{+Q}[ssl_c_s_dn(cn)] http-request set-header X-SSL-Issuer %{+Q}[ssl_c_i_dn] http-request set-header X-SSL-Client-NotBefore %{+Q}[ssl_c_notbefore] http-request set-header X-SSL-Client-NotAfter %{+Q}[ssl_c_notafter] ``` **http-request set-log-level** <level> [ { if | unless } <condition> ] ``` This is used to change the log level of the current request when a certain condition is met. 
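# A minimal illustrative sketch; the monitoring source address is an assumption.
http-request set-log-level silent if { src 10.0.0.10 }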
Valid levels are the 8 syslog levels (see the "log" keyword) plus the special level "silent" which disables logging for this request. This rule is not final so the last matching rule wins. This rule can be useful to disable health checks coming from another equipment. ``` **http-request set-map**(<file-name>) <key fmt> <value fmt> [ { if | unless } <condition> ] ``` This is used to add a new entry into a MAP. The MAP must be loaded from a file (even a dummy empty file). The file name of the MAP to be updated is passed between parentheses. It takes 2 arguments: <key fmt>, which follows log-format rules, used to collect MAP key, and <value fmt>, which follows log-format rules, used to collect content for the new entry. It performs a lookup in the MAP before insertion, to avoid duplicated (or more) values. This lookup is done by a linear search and can be expensive with large lists! It is the equivalent of the "set map" command from the stats socket, but can be triggered by an HTTP request. ``` **http-request set-mark** <mark> [ { if | unless } <condition> ] ``` This is used to set the Netfilter/IPFW MARK on all packets sent to the client to the value passed in <mark> on platforms which support it. This value is an unsigned 32 bit value which can be matched by netfilter/ipfw and by the routing table or monitoring the packets through DTrace. It can be expressed both in decimal or hexadecimal format (prefixed by "0x"). This can be useful to force certain packets to take a different route (for example a cheaper network path for bulk downloads). This works on Linux kernels 2.6.32 and above and requires admin privileges, as well on FreeBSD and OpenBSD. ``` **http-request set-method** <fmt> [ { if | unless } <condition> ] ``` This rewrites the request method with the result of the evaluation of format string <fmt>. There should be very few valid reasons for having to do so as this is more likely to break something than to fix it. ``` **http-request set-nice** <nice> [ { if | unless } <condition> ] ``` This sets the "[nice](#nice)" factor of the current request being processed. It only has effect against the other requests being processed at the same time. The default value is 0, unless altered by the "[nice](#nice)" setting on the "bind" line. The accepted range is -1024..1024. The higher the value, the nicest the request will be. Lower values will make the request more important than other ones. This can be useful to improve the speed of some requests, or lower the priority of non-important requests. Using this setting without prior experimentation can cause some major slowdown. ``` **http-request set-path** <fmt> [ { if | unless } <condition> ] ``` This rewrites the request path with the result of the evaluation of format string <fmt>. The query string, if any, is left intact. If a scheme and authority is found before the path, they are left intact as well. If the request doesn't have a path ("*"), this one is replaced with the format. This can be used to prepend a directory component in front of a path for example. See also "[http-request set-query](#http-request%20set-query)" and "[http-request set-uri](#http-request%20set-uri)". ``` Example : ``` # prepend the host name before the path http-request set-path /%[hdr(host)]%[path] ``` **http-request set-pathq** <fmt> [ { if | unless } <condition> ] ``` This does the same as "[http-request set-path](#http-request%20set-path)" except that the query-string is also rewritten. 
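# A minimal illustrative sketch: strip the whole query string, question mark
# included, by rewriting path+query to the bare path.
http-request set-pathq %[path]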
In particular, it may be used to remove the query-string, including the question mark (which is not possible with "[http-request set-query](#http-request%20set-query)"). ``` **http-request set-priority-class** <expr> [ { if | unless } <condition> ] ``` This is used to set the queue priority class of the current request. The value must be a sample expression which converts to an integer in the range -2047..2047. Results outside this range will be truncated. The priority class determines the order in which queued requests are processed. Lower values have higher priority. ``` **http-request set-priority-offset** <expr> [ { if | unless } <condition> ] ``` This is used to set the queue priority timestamp offset of the current request. The value must be a sample expression which converts to an integer in the range -524287..524287. Results outside this range will be truncated. When a request is queued, it is ordered first by the priority class, then by the current timestamp adjusted by the given offset in milliseconds. Lower values have higher priority. Note that the resulting timestamp is only tracked with enough precision for 524,287ms (8m44s287ms). If the request is queued long enough that the adjusted timestamp exceeds this value, it will be misidentified as highest priority. Thus it is important to set "[timeout queue](#timeout%20queue)" to a value that, when combined with the offset, does not exceed this limit. ``` **http-request set-query** <fmt> [ { if | unless } <condition> ] ``` This rewrites the request's query string which appears after the first question mark ("?") with the result of the evaluation of format string <fmt>. The part prior to the question mark is left intact. If the request doesn't contain a question mark and the new value is not empty, then one is added at the end of the URI, followed by the new value. If a question mark was present, it will never be removed even if the value is empty. This can be used to add or remove parameters from the query string. See also "[http-request set-path](#http-request%20set-path)" and "[http-request set-uri](#http-request%20set-uri)". ``` Example: ``` # replace "%3D" with "=" in the query string http-request set-query %[query,regsub(%3D,=,g)] ``` **http-request set-src** <expr> [ { if | unless } <condition> ] ``` This is used to set the source IP address to the value of the specified expression. This is useful when a proxy in front of HAProxy rewrites the source IP, but provides the correct IP in an HTTP header, or when you want to mask the source IP for privacy. All subsequent calls to the "[src](#src)" fetch will return this value (see example). ``` Arguments : ``` <expr> Is a standard HAProxy expression formed by a sample-fetch followed by some converters. ``` ``` See also "[option forwardfor](#option%20forwardfor)". ``` Example: ``` http-request set-src hdr(x-forwarded-for) http-request set-src src,ipmask(24) # After the masking this will track connections # based on the IP address with the last byte zeroed out. http-request track-sc0 src ``` ``` When possible, set-src preserves the original source port as long as the address family allows it, otherwise the source port is set to 0. ``` **http-request set-src-port** <expr> [ { if | unless } <condition> ] ``` This is used to set the source port to the value of the specified expression. ``` Arguments: ``` <expr> Is a standard HAProxy expression formed by a sample-fetch followed by some converters.
``` Example: ``` http-request set-src-port hdr(x-port) http-request set-src-port int(4000) ``` ``` When possible, set-src-port preserves the original source address as long as the address family supports a port, otherwise it forces the source address to IPv4 "0.0.0.0" before rewriting the port. ``` **http-request set-timeout** { server | tunnel } { <timeout> | <expr> } [ { if | unless } <condition> ] ``` This action overrides the specified "server" or "tunnel" timeout for the current stream only. The timeout can be specified in millisecond or with any other unit if the number is suffixed by the unit as explained at the top of this document. It is also possible to write an expression which must returns a number interpreted as a timeout in millisecond. Note that the server/tunnel timeouts are only relevant on the backend side and thus this rule is only available for the proxies with backend capabilities. Also the timeout value must be non-null to obtain the expected results. ``` Example: ``` http-request set-timeout tunnel 5s http-request set-timeout server req.hdr(host),map_int(host.lst) ``` **http-request set-tos** <tos> [ { if | unless } <condition> ] ``` This is used to set the TOS or DSCP field value of packets sent to the client to the value passed in <tos> on platforms which support this. This value represents the whole 8 bits of the IP TOS field, and can be expressed both in decimal or hexadecimal format (prefixed by "0x"). Note that only the 6 higher bits are used in DSCP or TOS, and the two lower bits are always 0. This can be used to adjust some routing behavior on border routers based on some information from the request. See RFC 2474, 2597, 3260 and 4594 for more information. ``` **http-request set-uri** <fmt> [ { if | unless } <condition> ] ``` This rewrites the request URI with the result of the evaluation of format string <fmt>. The scheme, authority, path and query string are all replaced at once. This can be used to rewrite hosts in front of proxies, or to perform complex modifications to the URI such as moving parts between the path and the query string. If an absolute URI is set, it will be sent as is to HTTP/1.1 servers. If it is not the desired behavior, the host, the path and/or the query string should be set separately. See also "[http-request set-path](#http-request%20set-path)" and "[http-request set-query](#http-request%20set-query)". http-request set-var(<var-name>[,<cond> ...]) <expr> [ { if | unless } <condition> ] http-request set-var-fmt(<var-name>[,<cond> ...]) <fmt> [ { if | unless } <condition> ] This is used to set the contents of a variable. The variable is declared inline. ``` Arguments: ``` <var-name> The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response) "req" : the variable is shared only during request processing "res" : the variable is shared only during response processing This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9' and '_'. <cond> A set of conditions that must all be true for the variable to actually be set (such as "ifnotempty", "ifgt" ...). See the set-var converter's description for a full list of possible conditions. <expr> Is a standard HAProxy expression formed by a sample-fetch followed by some converters. 
<fmt> This is the value expressed using log-format rules (see Custom Log Format in [section 8.2.4](#8.2.4)). ``` Example: ``` http-request set-var(req.my_var) req.fhdr(user-agent),lower http-request set-var-fmt(txn.from) %[src]:%[src_port] ``` **http-request silent-drop** [ rst-ttl <ttl> ] [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and removes the client-facing connection in a configurable way: When called without the rst-ttl argument, we try to prevent sending any FIN or RST packet back to the client by using TCP_REPAIR. If this fails (mainly because of missing privileges), we fall back to sending a RST packet with a TTL of 1. The effect is that the client still sees an established connection while there is none on HAProxy, saving resources. However, stateful equipment placed between the HAProxy and the client (firewalls, proxies, load balancers) will also keep the established connection in their session tables. The optional rst-ttl changes this behaviour: TCP_REPAIR is not used, and a RST packet with a configurable TTL is sent. When set to a reasonable value, the RST packet travels through your own equipment, deleting the connection in your middle-boxes, but does not arrive at the client. Future packets from the client will then be dropped already by your middle-boxes. These "local RST"s protect your resources, but not the client's. Do not use it unless you fully understand how it works. ``` **http-request strict-mode** { on | off } [ { if | unless } <condition> ] ``` This enables or disables the strict rewriting mode for following rules. It does not affect rules declared before it and it is only applicable on rules performing a rewrite on the requests. When the strict mode is enabled, any rewrite failure triggers an internal error. Otherwise, such errors are silently ignored. The purpose of the strict rewriting mode is to make some rewrites optional while others must be performed to continue the request processing. By default, the strict rewriting mode is enabled. Its value is also reset when a ruleset evaluation ends. So, for instance, if you change the mode on the frontend, the default mode is restored when HAProxy starts the backend rules evaluation. ``` **http-request tarpit** [deny\_status <status>] [ { if | unless } <condition> ] **http-request tarpit** [ { status | deny\_status } <code>] [content-type <type>] [ { default-errorfiles | errorfile <file> | errorfiles <name> | file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] [ hdr <name> <fmt> ]\* [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and immediately blocks the request without responding for a delay specified by "[timeout tarpit](#timeout%20tarpit)" or "timeout connect" if the former is not set. After that delay, if the client is still connected, a response is returned so that the client does not suspect it has been tarpitted. Logs will report the flags "PT". The goal of the tarpit rule is to slow down robots during an attack when they're limited on the number of concurrent requests. It can be very efficient against very dumb robots, and will significantly reduce the load on firewalls compared to a "deny" rule. But when facing "correctly" developed robots, it can make things worse by forcing HAProxy and the front firewall to support insane number of concurrent connections. By default an HTTP error 500 is returned. But the response may be customized using same syntax than "[http-request return](#http-request%20return)" rules. 
Thus, see "[http-request return](#http-request%20return)" for details. For compatibility purpose, when no argument is defined, or only "deny_status", the argument "default-errorfiles" is implied. It means "http-request tarpit [deny_status <status>]" is an alias of "http-request tarpit [status <status>] default-errorfiles". No further "http-request" rules are evaluated. See also "[http-request return](#http-request%20return)" and "[http-request silent-drop](#http-request%20silent-drop)". ``` **http-request track-sc0** <key> [table <table>] [ { if | unless } <condition> ] **http-request track-sc1** <key> [table <table>] [ { if | unless } <condition> ] **http-request track-sc2** <key> [table <table>] [ { if | unless } <condition> ] ``` This enables tracking of sticky counters from current request. These rules do not stop evaluation and do not change default action. The number of counters that may be simultaneously tracked by the same connection is set in MAX_SESS_STKCTR at build time (reported in haproxy -vv) which defaults to 3, so the track-sc number is between 0 and (MAX_SESS_STKCTR-1). The first "track-sc0" rule executed enables tracking of the counters of the specified table as the first set. The first "track-sc1" rule executed enables tracking of the counters of the specified table as the second set. The first "track-sc2" rule executed enables tracking of the counters of the specified table as the third set. It is a recommended practice to use the first set of counters for the per-frontend counters and the second set for the per-backend ones. But this is just a guideline, all may be used everywhere. ``` Arguments : ``` <key> is mandatory, and is a sample expression rule as described in [section 7.3](#7.3). It describes what elements of the incoming request or connection will be analyzed, extracted, combined, and used to select which table entry to update the counters. <table> is an optional table to be used instead of the default one, which is the stick-table declared in the current proxy. All the counters for the matches and updates for the key will then be performed in that table until the session ends. ``` ``` Once a "track-sc*" rule is executed, the key is looked up in the table and if it is not found, an entry is allocated for it. Then a pointer to that entry is kept during all the session's life, and this entry's counters are updated as often as possible, every time the session's counters are updated, and also systematically when the session ends. Counters are only updated for events that happen after the tracking has been started. As an exception, connection counters and request counters are systematically updated so that they reflect useful information. If the entry tracks concurrent connection counters, one connection is counted for as long as the entry is tracked, and the entry will not expire during that time. Tracking counters also provides a performance advantage over just checking the keys, because only one table lookup is performed for all ACL checks that make use of it. ``` **http-request unset-var**(<var-name>) [ { if | unless } <condition> ] ``` This is used to unset a variable. See above for details about <var-name>. ``` Example: ``` http-request unset-var(req.my_var) ``` **http-request use-service** <service-name> [ { if | unless } <condition> ] ``` This directive executes the configured HTTP service to reply to the request and stops the evaluation of the rules. 
An HTTP service may choose to reply by sending any valid HTTP response or it may immediately close the connection without sending any response. Besides the native services, such as the Prometheus exporter, it is possible to write your own services in Lua. No further "http-request" rules are evaluated. ``` Arguments : ``` <service-name> is mandatory. It is the service to call. ``` Example: ``` http-request use-service prometheus-exporter if { path /metrics } ``` **http-request wait-for-body time** <time> [ at-least <bytes> ] [ { if | unless } <condition> ] ``` This will delay the processing of the request, waiting for the payload for at most <time> milliseconds. If the "at-least" argument is specified, HAProxy stops waiting for the payload once the first <bytes> bytes have been received. 0 means no limit and is the default value. Regardless of the "at-least" argument value, HAProxy stops waiting if the whole payload is received or if the request buffer is full. This action may be used as a replacement for "option http-buffer-request". ``` Arguments : ``` <time> is mandatory. It is the maximum time to wait for the body. It follows the HAProxy time format and is expressed in milliseconds. <bytes> is optional. It is the minimum payload size to receive before HAProxy stops waiting. It follows the HAProxy size format and is expressed in bytes. ``` Example: ``` http-request wait-for-body time 1s at-least 1k if METH_POST ``` **See also :** "[option http-buffer-request](#option%20http-buffer-request)" **http-request wait-for-handshake** [ { if | unless } <condition> ] ``` This will delay the processing of the request until the SSL handshake has completed. This is mostly useful to delay processing early data until we're sure they are valid. ``` **http-response** <action> <options...> [ { if | unless } <condition> ] ``` Access control for Layer 7 responses ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | yes | yes | yes | ``` The http-response statement defines a set of rules which apply to layer 7 processing. The rules are evaluated in their declaration order when they are met in a frontend, listen or backend section. Any rule may optionally be followed by an ACL-based condition, in which case it will only be evaluated if the condition is true. Since these rules apply on responses, the backend rules are applied first, followed by the frontend's rules. The first keyword is the rule's action. Several types of actions are supported: - add-acl(<file-name>) <key fmt> - add-header <name> <fmt> - allow - cache-store <name> - capture <sample> id <id> - del-acl(<file-name>) <key fmt> - del-header <name> [ -m <meth> ] - del-map(<file-name>) <key fmt> - deny [ { status | deny_status } <code>] ... - redirect <rule> - replace-header <name> <regex-match> <replace-fmt> - replace-value <name> <regex-match> <replace-fmt> - return [status <code>] [content-type <type>] ...
- sc-inc-gpc(<idx>,<sc-id>) - sc-inc-gpc0(<sc-id>) - sc-inc-gpc1(<sc-id>) - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } - sc-set-gpt0(<sc-id>) { <int> | <expr> } - send-spoe-group <engine-name> <group-name> - set-bandwidth-limit <name> [limit <expr>] [period <expr>] - set-header <name> <fmt> - set-log-level <level> - set-map(<file-name>) <key fmt> <value fmt> - set-mark <mark> - set-nice <nice> - set-status <status> [reason <str>] - set-tos <tos> - set-var(<var-name>[,<cond> ...]) <expr> - set-var-fmt(<var-name>[,<cond> ...]) <fmt> - silent-drop - strict-mode { on | off } - track-sc0 <key> [table <table>] - track-sc1 <key> [table <table>] - track-sc2 <key> [table <table>] - unset-var(<var-name>) - wait-for-body time <time> [ at-least <bytes> ] The supported actions are described below. There is no limit to the number of http-response statements per instance. This directive is only available from named defaults sections, not anonymous ones. Rules defined in the defaults section are evaluated before ones in the associated proxy section. To avoid ambiguities, in this case the same defaults section cannot be used by proxies with the frontend capability and by proxies with the backend capability. It means a listen section cannot use a defaults section defining such rules. ``` Example: ``` acl key_acl res.hdr(X-Acl-Key) -m found acl myhost hdr(Host) -f myhost.lst http-response add-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl http-response del-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl ``` Example: ``` acl value res.hdr(X-Value) -m found use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found } http-response set-map(map.lst) %[src] %[res.hdr(X-Value)] if value http-response del-map(map.lst) %[src] if ! value ``` **See also :** "http-request", [section 3.4](#3.4) about userlists and [section 7](#7) about ACL usage. **http-response add-acl**(<file-name>) <key fmt> [ { if | unless } <condition> ] ``` This is used to add a new entry into an ACL. Please refer to "http-request add-acl" for a complete description. ``` **http-response add-header** <name> <fmt> [ { if | unless } <condition> ] ``` This appends an HTTP header field whose name is specified in <name> and whose value is defined by <fmt>. Please refer to "[http-request add-header](#http-request%20add-header)" for a complete description. ``` **http-response allow** [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and lets the response pass the check. No further "http-response" rules are evaluated for the current section. ``` **http-response cache-store** <name> [ { if | unless } <condition> ] ``` See [section 6.2](#6.2) about cache setup. ``` **http-response capture** <sample> id <id> [ { if | unless } <condition> ] ``` This captures sample expression <sample> from the response buffer, and converts it to a string. The resulting string is stored into the next request "[capture](#capture)" slot, so it will possibly appear next to some captured HTTP headers. It will then automatically appear in the logs, and it will be possible to extract it using sample fetch rules to feed it into headers or anything. Please check [section 7.3](#7.3) (Fetching samples) and "[capture response header](#capture%20response%20header)" for more information. The keyword "id" is the id of the capture slot which is used for storing the string. The capture slot must be defined in an associated frontend. This is useful to run captures in backends. 
The slot id can be declared by a previous directive "[http-response capture](#http-response%20capture)" or with the "[declare capture](#declare%20capture)" keyword. When using this action in a backend, double check that the relevant frontend(s) have the required capture slots otherwise, this rule will be ignored at run time. This can't be detected at configuration parsing time due to HAProxy's ability to dynamically resolve backend name at runtime. ``` **http-response del-acl**(<file-name>) <key fmt> [ { if | unless } <condition> ] ``` This is used to delete an entry from an ACL. Please refer to "http-request del-acl" for a complete description. ``` **http-response del-header** <name> [ -m <meth> ] [ { if | unless } <condition> ] ``` This removes all HTTP header fields whose name is specified in <name>. Please refer to "[http-request del-header](#http-request%20del-header)" for a complete description. ``` **http-response del-map**(<file-name>) <key fmt> [ { if | unless } <condition> ] ``` This is used to delete an entry from a MAP. Please refer to "http-request del-map" for a complete description. ``` **http-response deny** [deny\_status <status>] [ { if | unless } <condition> ] **http-response deny** [ { status | deny\_status } <code>] [content-type <type>] [ { default-errorfiles | errorfile <file> | errorfiles <name> | file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] [ hdr <name> <fmt> ]\* [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and immediately rejects the response. By default an HTTP 502 error is returned. But the response may be customized using same syntax than "[http-response return](#http-response%20return)" rules. Thus, see "[http-response return](#http-response%20return)" for details. For compatibility purpose, when no argument is defined, or only "deny_status", the argument "default-errorfiles" is implied. It means "http-response deny [deny_status <status>]" is an alias of "http-response deny [status <status>] default-errorfiles". No further "http-response" rules are evaluated. See also "[http-response return](#http-response%20return)". ``` **http-response redirect** <rule> [ { if | unless } <condition> ] ``` This performs an HTTP redirection based on a redirect rule. This supports a format string similarly to "[http-request redirect](#http-request%20redirect)" rules, with the exception that only the "location" type of redirect is possible on the response. See the "[redirect](#redirect)" keyword for the rule's syntax. When a redirect rule is applied during a response, connections to the server are closed so that no data can be forwarded from the server to the client. ``` **http-response replace-header** <name> <regex-match> <replace-fmt> [ { if | unless } <condition> ] ``` This works like "[http-request replace-header](#http-request%20replace-header)" except that it works on the server's response instead of the client's request. ``` Example: ``` http-response replace-header Set-Cookie (C=[^;]*);(.*) \1;ip=%bi;\2 # applied to: Set-Cookie: C=1; expires=Tue, 14-Jun-2016 01:40:45 GMT # outputs: Set-Cookie: C=1;ip=192.168.1.20; expires=Tue, 14-Jun-2016 01:40:45 GMT # assuming the backend IP is 192.168.1.20. ``` **http-response replace-value** <name> <regex-match> <replace-fmt> [ { if | unless } <condition> ] ``` This works like "[http-request replace-value](#http-request%20replace-value)" except that it works on the server's response instead of the client's request. 
``` Example: ``` http-response replace-value Cache-control ^public$ private # applied to: Cache-Control: max-age=3600, public # outputs: Cache-Control: max-age=3600, private ``` **http-response return** [status <code>] [content-type <type>] [ { default-errorfiles | errorfile <file> | errorfiles <name> | file <file> | lf-file <file> | string <str> | lf-string <fmt> } ] [ hdr <name> <value> ]\* [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and immediately returns a response. Please refer to "[http-request return](#http-request%20return)" for a complete description. No further "http-response" rules are evaluated. ``` **http-response sc-inc-gpc**(<idx>,<sc-id>) [ { if | unless } <condition> ] **http-response sc-inc-gpc0**(<sc-id>) [ { if | unless } <condition> ] **http-response sc-inc-gpc1**(<sc-id>) [ { if | unless } <condition> ] ``` These actions increment the General Purpose Counters according to the sticky counter designated by <sc-id>. Please refer to "[http-request sc-inc-gpc](#http-request%20sc-inc-gpc)", "[http-request sc-inc-gpc0](#http-request%20sc-inc-gpc0)" and "[http-request sc-inc-gpc1](#http-request%20sc-inc-gpc1)" for a complete description. ``` **http-response sc-set-gpt**(<idx>,<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] **http-response sc-set-gpt0**(<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] ``` These actions set the 32-bit unsigned General Purpose Tags according to the sticky counter designated by <sc-id>. Please refer to "http-request sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. ``` **http-response send-spoe-group** <engine-name> <group-name> [ { if | unless } <condition> ] ``` This action is used to trigger sending of a group of SPOE messages. Please refer to "[http-request send-spoe-group](#http-request%20send-spoe-group)" for a complete description. ``` **http-response set-bandwidth-limit** <name> [limit <expr>] [period <expr>] [ { if | unless } <condition> ] ``` This action is used to enable the bandwidth limitation filter <name>, either on the upload or download direction depending on the filter type. Please refer to "[http-request set-bandwidth-limit](#http-request%20set-bandwidth-limit)" for a complete description. ``` **http-response set-header** <name> <fmt> [ { if | unless } <condition> ] ``` This does the same as "[http-response add-header](#http-response%20add-header)" except that the header name is first removed if it existed. This is useful when passing security information to the server, where the header must not be manipulated by external users. ``` **http-response set-log-level** <level> [ { if | unless } <condition> ] ``` This is used to change the log level of the current response. Please refer to "[http-request set-log-level](#http-request%20set-log-level)" for a complete description. ``` **http-response set-map**(<file-name>) <key fmt> <value fmt> [ { if | unless } <condition> ] ``` This is used to add a new entry into a MAP. Please refer to "http-request set-map" for a complete description. ``` **http-response set-mark** <mark> [ { if | unless } <condition> ] ``` This action is used to set the Netfilter/IPFW MARK in all packets sent to the client to the value passed in <mark> on platforms which support it. Please refer to "[http-request set-mark](#http-request%20set-mark)" for a complete description.
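# A minimal illustrative sketch, not taken from the official manual:
# mark packets of 503 responses so that a Netfilter/IPFW rule outside
# HAProxy can apply a dedicated policy to them. The mark value and the
# condition are arbitrary examples.
http-response set-mark 0x1 if { status 503 }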
``` **http-response set-nice** <nice> [ { if | unless } <condition> ] ``` This sets the "[nice](#nice)" factor of the current request being processed. Please refer to "[http-request set-nice](#http-request%20set-nice)" for a complete description. ``` **http-response set-status** <status> [reason <str>] [ { if | unless } <condition> ] ``` This replaces the response status code with <status> which must be an integer between 100 and 999. Optionally, a custom reason text can be provided defined by <str>, or the default reason for the specified code will be used as a fallback. ``` Example: ``` # return "431 Request Header Fields Too Large" http-response set-status 431 # return "503 Slow Down", custom reason http-response set-status 503 reason "Slow Down". ``` **http-response set-tos** <tos> [ { if | unless } <condition> ] ``` This is used to set the TOS or DSCP field value of packets sent to the client to the value passed in <tos> on platforms which support this. Please refer to "[http-request set-tos](#http-request%20set-tos)" for a complete description. http-response set-var(<var-name>[,<cond> ...]) <expr> [ { if | unless } <condition> ] http-response set-var-fmt(<var-name>[,<cond> ...]) <fmt> [ { if | unless } <condition> ] This is used to set the contents of a variable. The variable is declared inline. Please refer to "http-request set-var" and "http-request set-var-fmt" for a complete description. ``` **http-response silent-drop** [ rst-ttl <ttl> ] [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and makes the client-facing connection suddenly disappear using a system-dependent way that tries to prevent the client from being notified. Please refer to "[http-request silent-drop](#http-request%20silent-drop)" for a complete description. ``` **http-response strict-mode** { on | off } [ { if | unless } <condition> ] ``` This enables or disables the strict rewriting mode for following rules. Please refer to "[http-request strict-mode](#http-request%20strict-mode)" for a complete description. ``` **http-response track-sc0** <key> [table <table>] [ { if | unless } <condition> ] **http-response track-sc1** <key> [table <table>] [ { if | unless } <condition> ] **http-response track-sc2** <key> [table <table>] [ { if | unless } <condition> ] ``` This enables tracking of sticky counters from current connection. Please refer to "[http-request track-sc0](#http-request%20track-sc0)", "[http-request track-sc1](#http-request%20track-sc1)" and "http-request track-sc2" for a complete description. ``` **http-response unset-var**(<var-name>) [ { if | unless } <condition> ] ``` This is used to unset a variable. See "http-request set-var" for details about <var-name>. ``` **http-response wait-for-body time** <time> [ at-least <bytes> ] [ { if | unless } <condition> ] ``` This will delay the processing of the response waiting for the payload for at most <time> milliseconds. Please refer to "[http-request wait-for-body](#http-request%20wait-for-body)" for a complete description. ``` **http-reuse** { never | safe | aggressive | always } ``` Declare how idle HTTP connections may be shared between requests ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | ``` By default, a connection established between HAProxy and the backend server which is considered safe for reuse is moved back to the server's idle connections pool so that any other request can make use of it. This is the "safe" strategy below. 
The argument indicates the desired connection reuse strategy : - "never" : idle connections are never shared between sessions. This mode may be enforced to cancel a different strategy inherited from a defaults section or for troubleshooting. For example, if an old bogus application considers that multiple requests over the same connection come from the same client and it is not possible to fix the application, it may be desirable to disable connection sharing in a single backend. An example of such an application could be an old HAProxy using cookie insertion in tunnel mode and not checking any request past the first one. - "safe" : this is the default and the recommended strategy. The first request of a session is always sent over its own connection, and only subsequent requests may be dispatched over other existing connections. This ensures that in case the server closes the connection when the request is being sent, the browser can decide to silently retry it. Since it is exactly equivalent to regular keep-alive, there should be no side effects. There is also a special handling for the connections using protocols subject to Head-of-line blocking (backend with h2 or fcgi). In this case, when at least one stream is processed, the used connection is reserved to handle streams of the same session. When no more streams are processed, the connection is released and can be reused. - "aggressive" : this mode may be useful in webservices environments where all servers are not necessarily known and where it would be appreciable to deliver most first requests over existing connections. In this case, first requests are only delivered over existing connections that have been reused at least once, proving that the server correctly supports connection reuse. It should only be used when it's sure that the client can retry a failed request once in a while and where the benefit of aggressive connection reuse significantly outweighs the downsides of rare connection failures. - "always" : this mode is only recommended when the path to the server is known for never breaking existing connections quickly after releasing them. It allows the first request of a session to be sent to an existing connection. This can provide a significant performance increase over the "safe" strategy when the backend is a cache farm, since such components tend to show a consistent behavior and will benefit from the connection sharing. It is recommended that the "[http-keep-alive](#option%20http-keep-alive)" timeout remains low in this mode so that no dead connections remain usable. In most cases, this will lead to the same performance gains as "aggressive" but with more risks. It should only be used when it improves the situation over "aggressive". When http connection sharing is enabled, a great care is taken to respect the connection properties and compatibility. Indeed, some properties are specific and it is not possibly to reuse it blindly. Those are the SSL SNI, source and destination address and proxy protocol block. A connection is reused only if it shares the same set of properties with the request. Also note that connections with certain bogus authentication schemes (relying on the connection) like NTLM are marked private and never shared. A connection pool is involved and configurable with "[pool-max-conn](#pool-max-conn)". Note: connection reuse improves the accuracy of the "server maxconn" setting, because almost no new connection will be established while idle connections remain available. 
This is particularly true with the "always" strategy. The rules to decide to keep an idle connection opened or to close it after processing are also governed by the "[tune.pool-low-fd-ratio](#tune.pool-low-fd-ratio)" (default: 20%) and "[tune.pool-high-fd-ratio](#tune.pool-high-fd-ratio)" (default: 25%). These correspond to the percentage of total file descriptors spent in idle connections above which haproxy will respectively refrain from keeping a connection opened after a response, and actively kill idle connections. Some setups using a very high ratio of idle connections, either because of too low a global "maxconn", or due to a lot of HTTP/2 or HTTP/3 traffic on the frontend (few connections) but HTTP/1 connections on the backend, may observe a lower reuse rate because too few connections are kept open. It may be desirable in this case to adjust such thresholds or simply to increase the global "maxconn" value. Similarly, when thread groups are explicitly enabled, it is important to understand that idle connections are only usable between threads from a same group. As such it may happen that unfair load between groups leads to more idle connections being needed, causing a lower reuse rate. The same solution may then be applied (increase global "maxconn" or increase pool ratios). ``` **See also :** "[option http-keep-alive](#option%20http-keep-alive)", "server maxconn", "[thread-groups](#thread-groups)", "[tune.pool-high-fd-ratio](#tune.pool-high-fd-ratio)", "[tune.pool-low-fd-ratio](#tune.pool-low-fd-ratio)" **http-send-name-header** [<header>] ``` Add the server name to a request. Use the header string given by <header> ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <header> The header string to use to send the server name ``` ``` The "[http-send-name-header](#http-send-name-header)" statement causes the header field named <header> to be set to the name of the target server at the moment the request is about to be sent on the wire. Any existing occurrences of this header are removed. Upon retries and redispatches, the header field is updated to always reflect the server being attempted to connect to. Given that this header is modified very late in the connection setup, it may have unexpected effects on already modified headers. For example using it with transport-level header such as connection, content-length, transfer-encoding and so on will likely result in invalid requests being sent to the server. Additionally it has been reported that this directive is currently being used as a way to overwrite the Host header field in outgoing requests; while this trick has been known to work as a side effect of the feature for some time, it is not officially supported and might possibly not work anymore in a future version depending on the technical difficulties this feature induces. A long-term solution instead consists in fixing the application which required this trick so that it binds to the correct host name. ``` **See also :** "server" **id** <value> ``` Set a persistent ID to a proxy. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | yes | Arguments : none ``` Set a persistent ID for the proxy. This ID must be unique and positive. An unused ID will automatically be assigned if unset. The first assigned value will be 1. This ID is currently only returned in statistics. 
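# A minimal illustrative sketch, not taken from the official manual:
# pin a stable persistent ID on a frontend so that statistics keep
# referring to the same numeric ID across configuration reorderings.
# The proxy name and the value are arbitrary.
frontend www
    bind :80
    id 100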
``` **ignore-persist** { if | unless } <condition> ``` Declare a condition to ignore persistence ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | ``` By default, when cookie persistence is enabled, every request containing the cookie is unconditionally persistent (assuming the target server is up and running). The "[ignore-persist](#ignore-persist)" statement allows one to declare various ACL-based conditions which, when met, will cause a request to ignore persistence. This is sometimes useful to load balance requests for static files, which often don't require persistence. This can also be used to fully disable persistence for a specific User-Agent (for example, some web crawler bots). The persistence is ignored when an "if" condition is met, or unless an "unless" condition is met. ``` Example: ``` acl url_static path_beg /static /images /img /css acl url_static path_end .gif .png .jpg .css .js ignore-persist if url_static ``` **See also :** "[force-persist](#force-persist)", "cookie", and [section 7](#7) about ACL usage. **load-server-state-from-file** { global | local | none } ``` Allow seamless reload of HAProxy ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | ``` This directive points HAProxy to a file where server state from the previous running process has been saved. That way, when starting up, before handling traffic, the new process can apply old states to servers exactly as if no reload occurred. The purpose of the "[load-server-state-from-file](#load-server-state-from-file)" directive is to tell HAProxy which file to use. For now, the supported arguments only allow either preventing any state from being loaded, or loading states from a file containing all backends and servers. The state file can be generated by running the command "show servers state" over the stats socket and redirecting its output. The format of the file is versioned and is very specific. To understand it, please read the documentation of the "show servers state" command (chapter 9.3 of the Management Guide). ``` Arguments: ``` global load the content of the file pointed to by the global directive named "[server-state-file](#server-state-file)". local load the content of the file pointed to by the directive "[server-state-file-name](#server-state-file-name)" if set. If not set, then the backend name is used as a file name. none don't load any state for this backend ``` ``` Notes: - server's IP address is preserved across reloads by default, but the order can be changed thanks to the server's "[init-addr](#init-addr)" setting. This means that an IP address change performed on the CLI at run time will be preserved, and that any change to the local resolver (e.g. /etc/hosts) will possibly not have any effect if the state file is in use. - server's weight is applied from the previous running process unless it has changed between the previous and new configuration files.
``` Example (minimal configuration): ``` global stats socket /tmp/socket server-state-file /tmp/server_state defaults load-server-state-from-file global backend bk server s1 127.0.0.1:22 check weight 11 server s2 127.0.0.1:22 check weight 12 ``` ``` Then one can run : socat /tmp/socket - <<< "show servers state" > /tmp/server_state Content of the file /tmp/server_state would be like this: 1 # <field names skipped for the doc example> 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0 ``` Example (minimal configuration): ``` global stats socket /tmp/socket server-state-base /etc/haproxy/states defaults load-server-state-from-file local backend bk server s1 127.0.0.1:22 check weight 11 server s2 127.0.0.1:22 check weight 12 ``` ``` Then one can run : socat /tmp/socket - <<< "show servers state bk" > /etc/haproxy/states/bk Content of the file /etc/haproxy/states/bk would be like this: 1 # <field names skipped for the doc example> 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0 ``` **See also:** "[server-state-file](#server-state-file)", "[server-state-file-name](#server-state-file-name)", and "show servers state" **log global** **log** <address> [len <length>] [format <format>] [sample <ranges>:<sample\_size>] <facility> [<level> [<minlevel>]] **no log** ``` Enable per-instance logging of events and traffic. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | ``` Prefix : no should be used when the logger list must be flushed. For example, if you don't want to inherit from the default logger list. This prefix does not allow arguments. ``` Arguments : ``` global should be used when the instance's logging parameters are the same as the global ones. This is the most common usage. "global" replaces <address>, <facility> and <level> with those of the log entries found in the "global" section. Only one "log global" statement may be used per instance, and this form takes no other parameter. <address> indicates where to send the logs. It takes the same format as for the "global" section's logs, and can be one of : - An IPv4 address optionally followed by a colon (':') and a UDP port. If no port is specified, 514 is used by default (the standard syslog port). - An IPv6 address followed by a colon (':') and optionally a UDP port. If no port is specified, 514 is used by default (the standard syslog port). - A filesystem path to a UNIX domain socket, keeping in mind considerations for chroot (be sure the path is accessible inside the chroot) and uid/gid (be sure the path is appropriately writable). - A file descriptor number in the form "fd@<number>", which may point to a pipe, terminal, or socket. In this case unbuffered logs are used and one writev() call per log is performed. This is a bit expensive but acceptable for most workloads. Messages sent this way will not be truncated but may be dropped, in which case the DroppedLogs counter will be incremented. The writev() call is atomic even on pipes for messages up to PIPE_BUF size, which POSIX recommends to be at least 512 and which is 4096 bytes on most modern operating systems. Any larger message may be interleaved with messages from other processes. Exceptionally for debugging purposes the file descriptor may also be directed to a file, but doing so will significantly slow HAProxy down as non-blocking calls will be ignored.
Also there will be no way to purge nor rotate this file without restarting the process. Note that the configured syslog format is preserved, so the output is suitable for use with a TCP syslog server. See also the "short" and "raw" formats below. - "stdout" / "stderr", which are respectively aliases for "fd@1" and "fd@2", see above. - A ring buffer in the form "ring@<name>", which will correspond to an in-memory ring buffer accessible over the CLI using the "show events" command, which will also list existing rings and their sizes. Such buffers are lost on reload or restart but when used as a complement this can help troubleshooting by having the logs instantly available. - An explicit stream address prefix such as "tcp@","tcp6@", "tcp4@" or "uxst@" will allocate an implicit ring buffer with a stream forward server targeting the given address. You may want to reference some environment variables in the address parameter, see [section 2.3](#2.3) about environment variables. <length> is an optional maximum line length. Log lines larger than this value will be truncated before being sent. The reason is that syslog servers act differently on log line length. All servers support the default value of 1024, but some servers simply drop larger lines while others do log them. If a server supports long lines, it may make sense to set this value here in order to avoid truncating long lines. Similarly, if a server drops long lines, it is preferable to truncate them before sending them. Accepted values are 80 to 65535 inclusive. The default value of 1024 is generally fine for all standard usages. Some specific cases of long captures or JSON-formatted logs may require larger values. <ranges> A list of comma-separated ranges to identify the logs to sample. This is used to balance the load of the logs to send to the log server. The limits of the ranges cannot be null. They are numbered from 1. The size or period (in number of logs) of the sample must be set with <sample_size> parameter. <sample_size> The size of the sample in number of logs to consider when balancing their logging loads. It is used to balance the load of the logs to send to the syslog server. This size must be greater or equal to the maximum of the high limits of the ranges. (see also <ranges> parameter). <format> is the log format used when generating syslog messages. It may be one of the following : local Analog to rfc3164 syslog message format except that hostname field is stripped. This is the default. Note: option "[log-send-hostname](#log-send-hostname)" switches the default to rfc3164. rfc3164 The RFC3164 syslog message format. (https://tools.ietf.org/html/rfc3164) rfc5424 The RFC5424 syslog message format. (https://tools.ietf.org/html/rfc5424) priority A message containing only a level plus syslog facility between angle brackets such as '<63>', followed by the text. The PID, date, time, process name and system name are omitted. This is designed to be used with a local log server. short A message containing only a level between angle brackets such as '<3>', followed by the text. The PID, date, time, process name and system name are omitted. This is designed to be used with a local log server. This format is compatible with what the systemd logger consumes. timed A message containing only a level between angle brackets such as '<3>', followed by ISO date and by the text. The PID, process name and system name are omitted. This is designed to be used with a local log server. 
iso A message containing only the ISO date, followed by the text. The PID, process name and system name are omitted. This is designed to be used with a local log server. raw A message containing only the text. The level, PID, date, time, process name and system name are omitted. This is designed to be used in containers or during development, where the severity only depends on the file descriptor used (stdout/stderr). <facility> must be one of the 24 standard syslog facilities : kern user mail daemon auth syslog lpr news uucp cron auth2 ftp ntp audit alert cron2 local0 local1 local2 local3 local4 local5 local6 local7 Note that the facility is ignored for the "short" and "raw" formats, but still required as a positional field. It is recommended to use "[daemon](#daemon)" in this case to make it clear that it's only supposed to be used locally. <level> is optional and can be specified to filter outgoing messages. By default, all messages are sent. If a level is specified, only messages with a severity at least as important as this level will be sent. An optional minimum level can be specified. If it is set, logs emitted with a more severe level than this one will be capped to this level. This is used to avoid sending "emerg" messages on all terminals on some default syslog configurations. Eight levels are known : emerg alert crit err warning notice info debug ``` ``` It is important to keep in mind that it is the frontend which decides what to log from a connection, and that in case of content switching, the log entries from the backend will be ignored. Connections are logged at level "info". However, backend log declarations define how and where server status changes will be logged. Level "notice" will be used to indicate a server going up, "warning" will be used for termination signals and definitive service termination, and "alert" will be used when a server goes down. Note : According to RFC3164, messages are truncated to 1024 bytes before being emitted. ``` Example : ``` log global log stdout format short daemon # send log to systemd log stdout format raw daemon # send everything to stdout log stderr format raw daemon notice # send important events to stderr log 127.0.0.1:514 local0 notice # only send important events log tcp@127.0.0.1:514 local0 notice notice # same but limit output # level and send in tcp log "${LOCAL_SYSLOG}:514" local0 notice # send to local server ``` **log-format** <string> ``` Specifies the log format string to use for traffic logs ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | ``` This directive specifies the log format string that will be used for all logs resulting from traffic passing through the frontend using this line. If the directive is used in a defaults section, all subsequent frontends will use the same log format. Please see [section 8.2.4](#8.2.4) which covers the log format string in depth. A specific log-format used only in case of connection error can also be defined, see the "[error-log-format](#error-log-format)" option. The "[log-format](#log-format)" directive overrides previous "[option tcplog](#option%20tcplog)", "[log-format](#log-format)", "[option httplog](#option%20httplog)" and "[option httpslog](#option%20httpslog)" directives.
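# A minimal illustrative sketch, not taken from the official manual:
# a trimmed-down HTTP log line built from the log-format variables
# described in section 8.2.4 (client address, request date, frontend,
# backend/server, timers, status, bytes and quoted request line).
frontend www
    mode http
    bind :80
    log global
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Ta %ST %B %{+Q}r"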
``` **log-format-sd** <string> ``` Specifies the RFC5424 structured-data log format string ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | ``` This directive specifies the RFC5424 structured-data log format string that will be used for all logs resulting from traffic passing through the frontend using this line. If the directive is used in a defaults section, all subsequent frontends will use the same log format. Please see [section 8.2.4](#8.2.4) which covers the log format string in depth. See https://tools.ietf.org/html/rfc5424#section-6.3 for more information about the RFC5424 structured-data part. Note : This log format string will be used only for loggers that have set log format to "rfc5424". ``` Example : ``` log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"] ``` **log-tag** <string> ``` Specifies the log tag to use for all outgoing logs ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | ``` Sets the tag field in the syslog header to this string. It defaults to the log-tag set in the global section, otherwise the program name as launched from the command line, which usually is "HAProxy". Sometimes it can be useful to differentiate between multiple processes running on the same host, or to differentiate customer instances running in the same process. In the backend, logs about servers up/down will use this tag. As a hint, it can be convenient to set a log-tag related to a hosted customer in a defaults section then put all the frontends and backends for that customer, then start another customer in a new defaults section. See also the global "log-tag" directive. ``` **max-keep-alive-queue** <value> ``` Set the maximum server queue size for maintaining keep-alive connections ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | ``` HTTP keep-alive tries to reuse the same server connection whenever possible, but sometimes it can be counter-productive, for example if a server has a lot of connections while other ones are idle. This is especially true for static servers. The purpose of this setting is to set a threshold on the number of queued connections at which HAProxy stops trying to reuse the same server and prefers to find another one. The default value, -1, means there is no limit. A value of zero means that keep-alive requests will never be queued. For very close servers which can be reached with a low latency and which are not sensible to breaking keep-alive, a low value is recommended (e.g. local static server can use a value of 10 or less). For remote servers suffering from a high latency, higher values might be needed to cover for the latency and/or the cost of picking a different server. Note that this has no impact on responses which are maintained to the same server consecutively to a 401 response. They will still go to the same server even if they have to be queued. ``` **See also :** "[option http-server-close](#option%20http-server-close)", "[option prefer-last-server](#option%20prefer-last-server)", server "maxconn" and cookie persistence. **max-session-srv-conns** <nb> ``` Set the maximum number of outgoing connections we can keep idling for a given client session. The default is 5 (it precisely equals MAX_SRV_LIST which is defined at build time). 
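# A minimal illustrative sketch, not taken from the official manual,
# assuming the directive is set at the frontend level: keep up to ten
# idle server-side connections attached to each client session instead
# of the default of five.
frontend www
    bind :80
    max-session-srv-conns 10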
``` **maxconn** <conns> ``` Fix the maximum number of concurrent connections on a frontend ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <conns> is the maximum number of concurrent connections the frontend will accept to serve. Excess connections will be queued by the system in the socket's listen queue and will be served once a connection closes. ``` ``` If the system supports it, it can be useful on big sites to raise this limit very high so that HAProxy manages connection queues, instead of leaving the clients with unanswered connection attempts. This value should not exceed the global maxconn. Also, keep in mind that a connection contains two buffers of tune.bufsize (16kB by default) each, as well as some other data resulting in about 33 kB of RAM being consumed per established connection. That means that a medium system equipped with 1GB of RAM can withstand around 20000-25000 concurrent connections if properly tuned. Also, when <conns> is set to large values, it is possible that the servers are not sized to accept such loads, and for this reason it is generally wise to assign them some reasonable connection limits. When this value is set to zero, which is the default, the global "maxconn" value is used. ``` **See also :** "server", global section's "maxconn", "[fullconn](#fullconn)" **mode** { tcp|http } ``` Set the running mode or protocol of the instance ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` tcp The instance will work in pure TCP mode. A full-duplex connection will be established between clients and servers, and no layer 7 examination will be performed. This is the default mode. It should be used for SSL, SSH, SMTP, ... http The instance will work in HTTP mode. The client request will be analyzed in depth before connecting to any server. Any request which is not RFC-compliant will be rejected. Layer 7 filtering, processing and switching will be possible. This is the mode which brings HAProxy most of its value. ``` ``` When doing content switching, it is mandatory that the frontend and the backend are in the same mode (generally HTTP), otherwise the configuration will be refused. ``` Example : ``` defaults http_instances mode http ``` **monitor fail** { if | unless } <condition> ``` Add a condition to report a failure to a monitor HTTP request. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | no | Arguments : ``` if <cond> the monitor request will fail if the condition is satisfied, and will succeed otherwise. The condition should describe a combined test which must induce a failure if all conditions are met, for instance a low number of servers both in a backend and its backup. unless <cond> the monitor request will succeed only if the condition is satisfied, and will fail otherwise. Such a condition may be based on a test on the presence of a minimum number of active servers in a list of backends. ``` ``` This statement adds a condition which can force the response to a monitor request to report a failure. By default, when an external component queries the URI dedicated to monitoring, a 200 response is returned. When one of the conditions above is met, HAProxy will return 503 instead of 200. 
This is very useful to report a site failure to an external component which may base routing advertisements between multiple sites on the availability reported by HAProxy. In this case, one would rely on an ACL involving the "nbsrv" criterion. Note that "[monitor fail](#monitor%20fail)" only works in HTTP mode. Both status messages may be tweaked using "errorfile" or "[errorloc](#errorloc)" if needed. ``` Example: ``` frontend www mode http acl site_dead nbsrv(dynamic) lt 2 acl site_dead nbsrv(static) lt 2 monitor-uri /site_alive monitor fail if site_dead ``` **See also :** "[monitor-uri](#monitor-uri)", "errorfile", "[errorloc](#errorloc)" **monitor-uri** <uri> ``` Intercept a URI used by external components' monitor requests ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <uri> is the exact URI which we want to intercept to return HAProxy's health status instead of forwarding the request. ``` ``` When an HTTP request referencing <uri> will be received on a frontend, HAProxy will not forward it nor log it, but instead will return either "HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on failure conditions defined with "[monitor fail](#monitor%20fail)". This is normally enough for any front-end HTTP probe to detect that the service is UP and running without forwarding the request to a backend server. Note that the HTTP method, the version and all headers are ignored, but the request must at least be valid at the HTTP level. This keyword may only be used with an HTTP-mode frontend. Monitor requests are processed very early, just after the request is parsed and even before any "http-request". The only rulesets applied before are the tcp-request ones. They cannot be logged either, and it is the intended purpose. Only one URI may be configured for monitoring; when multiple "[monitor-uri](#monitor-uri)" statements are present, the last one will define the URI to be used. They are only used to report HAProxy's health to an upper component, nothing more. However, it is possible to add any number of conditions using "[monitor fail](#monitor%20fail)" and ACLs so that the result can be adjusted to whatever check can be imagined (most often the number of available servers in a backend). Note: if <uri> starts by a slash ('/'), the matching is performed against the request's path instead of the request's uri. It is a workaround to let the HTTP/2 requests match the monitor-uri. Indeed, in HTTP/2, clients are encouraged to send absolute URIs only. ``` Example : ``` # Use /haproxy\_test to report HAProxy's status frontend www mode http monitor-uri /haproxy_test ``` **See also :** "[monitor fail](#monitor%20fail)" **option abortonclose** **no option abortonclose** ``` Enable or disable early dropping of aborted requests pending in queues. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` In presence of very high loads, the servers will take some time to respond. The per-instance connection queue will inflate, and the response time will increase respective to the size of the queue times the average per-session response time. 
When clients will wait for more than a few seconds, they will often hit the "STOP" button on their browser, leaving a useless request in the queue, and slowing down other users, and the servers as well, because the request will eventually be served, then aborted at the first error encountered while delivering the response. As there is no way to distinguish between a full STOP and a simple output close on the client side, HTTP agents should be conservative and consider that the client might only have closed its output channel while waiting for the response. However, this introduces risks of congestion when lots of users do the same, and is completely useless nowadays because probably no client at all will close the session while waiting for the response. Some HTTP agents support this behavior (Squid, Apache, HAProxy), and others do not (TUX, most hardware-based load balancers). So the probability for a closed input channel to represent a user hitting the "STOP" button is close to 100%, and the risk of being the single component to break rare but valid traffic is extremely low, which adds to the temptation to be able to abort a session early while still not served and not pollute the servers. In HAProxy, the user can choose the desired behavior using the option "[abortonclose](#option%20abortonclose)". By default (without the option) the behavior is HTTP compliant and aborted requests will be served. But when the option is specified, a session with an incoming channel closed will be aborted while it is still possible, either pending in the queue for a connection slot, or during the connection establishment if the server has not yet acknowledged the connection request. This considerably reduces the queue size and the load on saturated servers when users are tempted to click on STOP, which in turn reduces the response time for other users. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[timeout queue](#timeout%20queue)" and server's "maxconn" and "[maxqueue](#maxqueue)" parameters **option accept-invalid-http-request** **no option accept-invalid-http-request** ``` Enable or disable relaxing of HTTP request parsing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` By default, HAProxy complies with RFC7230 in terms of message parsing. This means that invalid characters in header names are not permitted and cause an error to be returned to the client. This is the desired behavior as such forbidden characters are essentially used to build attacks exploiting server weaknesses, and bypass security filtering. Sometimes, a buggy browser or server will emit invalid header names for whatever reason (configuration, implementation) and the issue will not be immediately fixed. In such a case, it is possible to relax HAProxy's header name parser to accept any character even if that does not make sense, by specifying this option. Similarly, the list of characters allowed to appear in a URI is well defined by RFC3986, and chars 0-31, 32 (space), 34 ('"'), 60 ('<'), 62 ('>'), 92 ('\'), 94 ('^'), 96 ('`'), 123 ('{'), 124 ('|'), 125 ('}'), 127 (delete) and anything above are not allowed at all. HAProxy always blocks a number of them (0..32, 127). The remaining ones are blocked by default unless this option is enabled. 
This option also relaxes the test on the HTTP version, it allows HTTP/0.9 requests to pass through (no version specified), as well as different protocol names (e.g. RTSP), and multiple digits for both the major and the minor version. This option should never be enabled by default as it hides application bugs and open security breaches. It should only be deployed after a problem has been confirmed. When this option is enabled, erroneous header names will still be accepted in requests, but the complete request will be captured in order to permit later analysis using the "show errors" request on the UNIX stats socket. Similarly, requests containing invalid chars in the URI part will be logged. Doing this also helps confirming that the issue has been solved. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option accept-invalid-http-response](#option%20accept-invalid-http-response)" and "show errors" on the stats socket. **option accept-invalid-http-response** **no option accept-invalid-http-response** ``` Enable or disable relaxing of HTTP response parsing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` By default, HAProxy complies with RFC7230 in terms of message parsing. This means that invalid characters in header names are not permitted and cause an error to be returned to the client. This is the desired behavior as such forbidden characters are essentially used to build attacks exploiting server weaknesses, and bypass security filtering. Sometimes, a buggy browser or server will emit invalid header names for whatever reason (configuration, implementation) and the issue will not be immediately fixed. In such a case, it is possible to relax HAProxy's header name parser to accept any character even if that does not make sense, by specifying this option. This option also relaxes the test on the HTTP version format, it allows multiple digits for both the major and the minor version. This option should never be enabled by default as it hides application bugs and open security breaches. It should only be deployed after a problem has been confirmed. When this option is enabled, erroneous header names will still be accepted in responses, but the complete response will be captured in order to permit later analysis using the "show errors" request on the UNIX stats socket. Doing this also helps confirming that the issue has been solved. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option accept-invalid-http-request](#option%20accept-invalid-http-request)" and "show errors" on the stats socket. **option allbackups** **no option allbackups** ``` Use either all backup servers at a time or only the first one ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` By default, the first operational backup server gets all traffic when normal servers are all down. Sometimes, it may be preferred to use multiple backups at once, because one will not be enough. When "[option allbackups](#option%20allbackups)" is enabled, the load balancing will be performed among all backup servers when all normal ones are unavailable. 
The same load balancing algorithm will be used and the servers' weights will be respected. Thus, there will not be any priority order between the backup servers anymore. This option is mostly used with static server farms dedicated to return a "sorry" page when an application is completely offline. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **option checkcache** **no option checkcache** ``` Analyze all server responses and block responses with cacheable cookies ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` Some high-level frameworks set application cookies everywhere and do not always let enough control to the developer to manage how the responses should be cached. When a session cookie is returned on a cacheable object, there is a high risk of session crossing or stealing between users traversing the same caches. In some situations, it is better to block the response than to let some sensitive session information go in the wild. The option "[checkcache](#option%20checkcache)" enables deep inspection of all server responses for strict compliance with HTTP specification in terms of cacheability. It carefully checks "Cache-control", "Pragma" and "Set-cookie" headers in server response to check if there's a risk of caching a cookie on a client-side proxy. When this option is enabled, the only responses which can be delivered to the client are : - all those without "Set-Cookie" header; - all those with a return code other than 200, 203, 204, 206, 300, 301, 404, 405, 410, 414, 501, provided that the server has not set a "Cache-control: public" header field; - all those that result from a request using a method other than GET, HEAD, OPTIONS, TRACE, provided that the server has not set a 'Cache-Control: public' header field; - those with a 'Pragma: no-cache' header - those with a 'Cache-control: private' header - those with a 'Cache-control: no-store' header - those with a 'Cache-control: max-age=0' header - those with a 'Cache-control: s-maxage=0' header - those with a 'Cache-control: no-cache' header - those with a 'Cache-control: no-cache="[set-cookie](#set-cookie)"' header - those with a 'Cache-control: no-cache="set-cookie,' header (allowing other fields after set-cookie) If a response doesn't respect these requirements, then it will be blocked just as if it was from an "[http-response deny](#http-response%20deny)" rule, with an "HTTP 502 bad gateway". The session state shows "PH--" meaning that the proxy blocked the response during headers processing. Additionally, an alert will be sent in the logs so that admins are informed that there's something to be fixed. Due to the high impact on the application, the application should be tested in depth with the option enabled before going to production. It is also a good practice to always activate it during tests, even if it is not used in production, as it will report potentially dangerous application behaviors. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. 
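As a purely illustrative sketch (the backend, server name and address below are arbitrary), the option is simply enabled in a backend or listen section :

    backend app
        mode http
        option checkcache
        server app1 192.168.1.11:80 check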
``` **option clitcpka** **no option clitcpka** ``` Enable or disable the sending of TCP keepalive packets on the client side ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` When there is a firewall or any session-aware component between a client and a server, and when the protocol involves very long sessions with long idle periods (e.g. remote desktops), there is a risk that one of the intermediate components decides to expire a session which has remained idle for too long. Enabling socket-level TCP keep-alives makes the system regularly send packets to the other end of the connection, leaving it active. The delay between keep-alive probes is controlled by the system only and depends both on the operating system and its tuning parameters. It is important to understand that keep-alive packets are neither emitted nor received at the application level. It is only the network stacks which see them. For this reason, even if one side of the proxy already uses keep-alives to maintain its connection alive, those keep-alive packets will not be forwarded to the other side of the proxy. Please note that this has nothing to do with HTTP keep-alive. Using option "[clitcpka](#option%20clitcpka)" enables the emission of TCP keep-alive probes on the client side of a connection, which should help when session expirations are noticed between HAProxy and a client. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option srvtcpka](#option%20srvtcpka)", "[option tcpka](#option%20tcpka)" **option contstats** ``` Enable continuous traffic statistics updates ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` By default, counters used for statistics calculation are incremented only when a session finishes. It works quite well when serving small objects, but with big ones (for example large images or archives) or with A/V streaming, a graph generated from HAProxy counters looks like a hedgehog. With this option enabled, counters get incremented frequently along the session, typically every 5 seconds, which is often enough to produce clean graphs. Recounting touches a hotpath directly, so it is not enabled by default, as it can cause a lot of wakeups for very large session counts and cause a small performance drop. ``` **option disable-h2-upgrade** **no option disable-h2-upgrade** ``` Enable or disable the implicit HTTP/2 upgrade from an HTTP/1.x client connection. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` By default, HAProxy is able to implicitly upgrade an HTTP/1.x client connection to an HTTP/2 connection if the first request it receives from a given HTTP connection matches the HTTP/2 connection preface (i.e. the string "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"). This way, it is possible to support HTTP/1.x and HTTP/2 clients on non-SSL connections. This option must be used to disable the implicit upgrade. Note this implicit upgrade is only supported for HTTP proxies, and thus so is this option. Note also it is possible to force HTTP/2 on clear connections by specifying "proto h2" on the bind line. Finally, this option is applied on all bind lines. 
To disable implicit HTTP/2 upgrades for a specific bind line, it is possible to use "proto h1". If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **option dontlog-normal** **no option dontlog-normal** ``` Enable or disable logging of normal, successful connections ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` There are large sites dealing with several thousand connections per second and for which logging is a major pain. Some of them are even forced to turn logs off and cannot debug production issues. Setting this option ensures that normal connections, those which experience no error, no timeout, no retry nor redispatch, will not be logged. This leaves disk space for anomalies. In HTTP mode, the response status code is checked and return codes 5xx will still be logged. It is strongly discouraged to use this option as most of the time, the key to complex issues is in the normal logs which will not be logged here. If you need to separate logs, see the "[log-separate-errors](#option%20log-separate-errors)" option instead. ``` **See also :** "log", "[dontlognull](#option%20dontlognull)", "[log-separate-errors](#option%20log-separate-errors)" and [section 8](#8) about logging. **option dontlognull** **no option dontlognull** ``` Enable or disable logging of null connections ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` In certain environments, there are components which will regularly connect to various systems to ensure that they are still alive. It can be the case from another load balancer as well as from monitoring systems. By default, even a simple port probe or scan will produce a log. If those connections pollute the logs too much, it is possible to enable option "[dontlognull](#option%20dontlognull)" to indicate that a connection on which no data has been transferred will not be logged, which typically corresponds to those probes. Note that errors will still be returned to the client and accounted for in the stats. If this is not what is desired, option http-ignore-probes can be used instead. It is generally recommended not to use this option in uncontrolled environments (e.g. internet), otherwise scans and other malicious activities would not be logged. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "log", "[http-ignore-probes](#option%20http-ignore-probes)", "[monitor-uri](#monitor-uri)", and [section 8](#8) about logging. **option forwardfor** [ except <network> ] [ header <name> ] [ if-none ] ``` Enable insertion of the X-Forwarded-For header to requests sent to servers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <network> is an optional argument used to disable this option for sources matching <network> <name> an optional argument to specify a different "X-Forwarded-For" header name. ``` ``` Since HAProxy works in reverse-proxy mode, the servers see its IP address as their client address. This is sometimes annoying when the client's IP address is expected in server logs. 
To solve this problem, the well-known HTTP header "X-Forwarded-For" may be added by HAProxy to all requests sent to the server. This header contains a value representing the client's IP address. Since this header is always appended at the end of the existing header list, the server must be configured to always use the last occurrence of this header only. See the server's manual to find how to enable use of this standard header. Note that only the last occurrence of the header must be used, since it is really possible that the client has already brought one. The keyword "header" may be used to supply a different header name to replace the default "X-Forwarded-For". This can be useful where you might already have a "X-Forwarded-For" header from a different application (e.g. stunnel), and you need preserve it. Also if your backend server doesn't use the "X-Forwarded-For" header and requires different one (e.g. Zeus Web Servers require "X-Cluster-Client-IP"). Sometimes, a same HAProxy instance may be shared between a direct client access and a reverse-proxy access (for instance when an SSL reverse-proxy is used to decrypt HTTPS traffic). It is possible to disable the addition of the header for a known source address or network by adding the "except" keyword followed by the network address. In this case, any source IP matching the network will not cause an addition of this header. Most common uses are with private networks or 127.0.0.1. IPv4 and IPv6 are both supported. Alternatively, the keyword "if-none" states that the header will only be added if it is not present. This should only be used in perfectly trusted environment, as this might cause a security issue if headers reaching HAProxy are under the control of the end-user. This option may be specified either in the frontend or in the backend. If at least one of them uses it, the header will be added. Note that the backend's setting of the header subargument takes precedence over the frontend's if both are defined. In the case of the "if-none" argument, if at least one of the frontend or the backend does not specify it, it wants the addition to be mandatory, so it wins. ``` Example : ``` # Public HTTP address also used by stunnel on the same machine frontend www mode http option forwardfor except 127.0.0.1 # stunnel already adds the header # Those servers want the IP Address in X-Client backend www mode http option forwardfor header X-Client ``` **See also :** "[option httpclose](#option%20httpclose)", "[option http-server-close](#option%20http-server-close)", "[option http-keep-alive](#option%20http-keep-alive)" **option h1-case-adjust-bogus-client** **no option h1-case-adjust-bogus-client** ``` Enable or disable the case adjustment of HTTP/1 headers sent to bogus clients ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` There is no standard case for header names because, as stated in RFC7230, they are case-insensitive. So applications must handle them in a case- insensitive manner. But some bogus applications violate the standards and erroneously rely on the cases most commonly used by browsers. This problem becomes critical with HTTP/2 because all header names must be exchanged in lower case, and HAProxy follows the same convention. All header names are sent in lower case to clients and servers, regardless of the HTTP version. 
When HAProxy receives an HTTP/1 response, its header names are converted to lower case and manipulated and sent this way to the clients. If a client is known to violate the HTTP standards and to fail to process a response coming from HAProxy, it is possible to transform the lower case header names to a different format when the response is formatted and sent to the client, by enabling this option and specifying the list of headers to be reformatted using the global directives "[h1-case-adjust](#h1-case-adjust)" or "[h1-case-adjust-file](#h1-case-adjust-file)". This must only be a temporary workaround for the time it takes the client to be fixed, because clients which require such workarounds might be vulnerable to content smuggling attacks and must absolutely be fixed. Please note that this option will not affect standards-compliant clients. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also:** "[option h1-case-adjust-bogus-server](#option%20h1-case-adjust-bogus-server)", "[h1-case-adjust](#h1-case-adjust)", "[h1-case-adjust-file](#h1-case-adjust-file)". **option h1-case-adjust-bogus-server** **no option h1-case-adjust-bogus-server** ``` Enable or disable the case adjustment of HTTP/1 headers sent to bogus servers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` There is no standard case for header names because, as stated in RFC7230, they are case-insensitive. So applications must handle them in a case- insensitive manner. But some bogus applications violate the standards and erroneously rely on the cases most commonly used by browsers. This problem becomes critical with HTTP/2 because all header names must be exchanged in lower case, and HAProxy follows the same convention. All header names are sent in lower case to clients and servers, regardless of the HTTP version. When HAProxy receives an HTTP/1 request, its header names are converted to lower case and manipulated and sent this way to the servers. If a server is known to violate the HTTP standards and to fail to process a request coming from HAProxy, it is possible to transform the lower case header names to a different format when the request is formatted and sent to the server, by enabling this option and specifying the list of headers to be reformatted using the global directives "[h1-case-adjust](#h1-case-adjust)" or "[h1-case-adjust-file](#h1-case-adjust-file)". This must only be a temporary workaround for the time it takes the server to be fixed, because servers which require such workarounds might be vulnerable to content smuggling attacks and must absolutely be fixed. Please note that this option will not affect standards-compliant servers. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also:** "[option h1-case-adjust-bogus-client](#option%20h1-case-adjust-bogus-client)", "[h1-case-adjust](#h1-case-adjust)", "[h1-case-adjust-file](#h1-case-adjust-file)". **option http-buffer-request** **no option http-buffer-request** ``` Enable or disable waiting for whole HTTP request body before proceeding ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` It is sometimes desirable to wait for the body of an HTTP request before taking a decision. 
This is what is being done by "[balance url\_param](#balance%20url_param)" for example. The first use case is to buffer requests from slow clients before connecting to the server. Another use case consists in taking the routing decision based on the request body's contents. This option placed in a frontend or backend forces the HTTP processing to wait until either the whole body is received or the request buffer is full. It can have undesired side effects with some applications abusing HTTP by expecting unbuffered transmissions between the frontend and the backend, so this should definitely not be used by default. ``` **See also :** "[option http-no-delay](#option%20http-no-delay)", "[timeout http-request](#timeout%20http-request)", "[http-request wait-for-body](#http-request%20wait-for-body)" **option http-ignore-probes** **no option http-ignore-probes** ``` Enable or disable logging of null connections and request timeouts ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` Recently some browsers started to implement a "pre-connect" feature consisting in speculatively connecting to some recently visited web sites just in case the user would like to visit them. This results in many connections being established to web sites, which end up in 408 Request Timeout if the timeout strikes first, or 400 Bad Request when the browser decides to close them first. These ones pollute the log and feed the error counters. There was already "[option dontlognull](#option%20dontlognull)" but it's insufficient in this case. Instead, this option does the following things : - prevent any 400/408 message from being sent to the client if nothing was received over a connection before it was closed; - prevent any log from being emitted in this situation; - prevent any error counter from being incremented That way the empty connection is silently ignored. Note that it is better not to use this unless it is clear that it is needed, because it will hide real problems. The most common reason for not receiving a request and seeing a 408 is due to an MTU inconsistency between the client and an intermediary element such as a VPN, which blocks too large packets. These issues are generally seen with POST requests as well as GET with large cookies. The logs are often the only way to detect them. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "log", "[dontlognull](#option%20dontlognull)", "errorfile", and [section 8](#8) about logging. **option http-keep-alive** **no option http-keep-alive** ``` Enable or disable HTTP keep-alive from client to server ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` By default HAProxy operates in keep-alive mode with regards to persistent connections: for each connection it processes each request and response, and leaves the connection idle on both sides between the end of a response and the start of a new request. This mode may be changed by several options such as "[option http-server-close](#option%20http-server-close)" or "[option httpclose](#option%20httpclose)". This option allows to set back the keep-alive mode, which can be useful when another mode was used in a defaults section. 
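For instance, as an illustrative sketch only (section and server names are arbitrary), keep-alive can be restored for one backend while a "defaults" section selects another mode :

    defaults
        mode http
        option httpclose

    backend static_content
        # revert to keep-alive mode for this backend only
        option http-keep-alive
        server s1 10.0.0.10:80 check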
Setting "[option http-keep-alive](#option%20http-keep-alive)" enables HTTP keep-alive mode on the client- and server- sides. This provides the lowest latency on the client side (slow network) and the fastest session reuse on the server side at the expense of maintaining idle connections to the servers. In general, it is possible with this option to achieve approximately twice the request rate that the "[http-server-close](#option%20http-server-close)" option achieves on small objects. There are mainly two situations where this option may be useful : - when the server is non-HTTP compliant and authenticates the connection instead of requests (e.g. NTLM authentication) - when the cost of establishing the connection to the server is significant compared to the cost of retrieving the associated object from the server. This last case can happen when the server is a fast static server or cache. In this case, the server will need to be properly tuned to support high enough connection counts because connections will last until the client sends another request. If the client request has to go to another backend or another server due to content switching or the load balancing algorithm, the idle connection will immediately be closed and a new one re-opened. Option "[prefer-last-server](#option%20prefer-last-server)" is available to try to optimize server selection so that if the server currently attached to an idle connection is usable, it will be used. At the moment, logs will not indicate whether requests came from the same session or not. The accept date reported in the logs corresponds to the end of the previous request, and the request time corresponds to the time spent waiting for a new request. The keep-alive request time is still bound to the timeout defined by "[timeout http-keep-alive](#timeout%20http-keep-alive)" or "[timeout http-request](#timeout%20http-request)" if not set. This option disables and replaces any previous "[option httpclose](#option%20httpclose)" or "option http-server-close". When backend and frontend options differ, all of these 4 options have precedence over "[option http-keep-alive](#option%20http-keep-alive)". ``` **See also :** "[option httpclose](#option%20httpclose)", "[option http-server-close](#option%20http-server-close)", "[option prefer-last-server](#option%20prefer-last-server)", "[option http-pretend-keepalive](#option%20http-pretend-keepalive)", and "1.1. The HTTP transaction model". **option http-no-delay** **no option http-no-delay** ``` Instruct the system to favor low interactive delays over performance in HTTP ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` In HTTP, each payload is unidirectional and has no notion of interactivity. Any agent is expected to queue data somewhat for a reasonably low delay. There are some very rare server-to-server applications that abuse the HTTP protocol and expect the payload phase to be highly interactive, with many interleaved data chunks in both directions within a single request. This is absolutely not supported by the HTTP specification and will not work across most proxies or servers. When such applications attempt to do this through HAProxy, it works but they will experience high delays due to the network optimizations which favor performance by instructing the system to wait for enough data to be available in order to only send full packets. Typical delays are around 200 ms per round trip. 
Note that this only happens with abnormal uses. Normal uses such as CONNECT requests and WebSockets are not affected. When "[option http-no-delay](#option%20http-no-delay)" is present in either the frontend or the backend used by a connection, all such optimizations will be disabled in order to make the exchanges as fast as possible. Of course this offers no guarantee on the functionality, as it may break at any other place. But if it works via HAProxy, it will work as fast as possible. This option should never be used by default, and should never be used at all unless such a buggy application is discovered. The impact of using this option is an increase of bandwidth usage and CPU usage, which may significantly lower performance in high latency environments. ``` **See also :** "[option http-buffer-request](#option%20http-buffer-request)" **option http-pretend-keepalive** **no option http-pretend-keepalive** ``` Define whether HAProxy will announce keepalive to the server or not ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` When running with "[option http-server-close](#option%20http-server-close)" or "[option httpclose](#option%20httpclose)", HAProxy adds a "Connection: close" header to the request forwarded to the server. Unfortunately, when some servers see this header, they automatically refrain from using the chunked encoding for responses of unknown length, while this is totally unrelated. The immediate effect is that this prevents HAProxy from maintaining the client connection alive. A second effect is that a client or a cache could receive an incomplete response without being aware of it, and consider the response complete. By setting "[option http-pretend-keepalive](#option%20http-pretend-keepalive)", HAProxy will make the server believe it will keep the connection alive. The server will then not fall back to the abnormal undesired behavior described above. When HAProxy gets the whole response, it will close the connection with the server just as it would do with the "[option httpclose](#option%20httpclose)". That way the client gets a normal response and the connection is correctly closed on the server side. It is recommended not to enable this option by default, because most servers will more efficiently close the connection themselves after the last packet, and release their buffers slightly earlier. Also, the added packet on the network could slightly reduce the overall peak performance. However it is worth noting that when this option is enabled, HAProxy will have slightly less work to do. So if HAProxy is the bottleneck on the whole architecture, enabling this option might save a few CPU cycles. This option may be set in backend and listen sections. If used in a frontend section, it will be ignored and a warning will be reported during startup. It is a backend-related option, so there is no real reason to set it on a frontend. This option may be combined with "[option httpclose](#option%20httpclose)", which will cause keepalive to be announced to the server and close to be announced to the client. This practice is discouraged though. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. 
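As an illustrative sketch only (the backend, server name and address are arbitrary), this option is typically combined with a connection-close mode in a backend :

    backend legacy_app
        mode http
        option http-server-close
        option http-pretend-keepalive
        server app1 192.168.0.21:8080 check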
``` **See also :** "[option httpclose](#option%20httpclose)", "[option http-server-close](#option%20http-server-close)", and "[option http-keep-alive](#option%20http-keep-alive)" **option http-restrict-req-hdr-names** { preserve | delete | reject } ``` Set HAProxy policy about HTTP request header names containing characters outside the "[a-zA-Z0-9-]" charset ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` preserve disable the filtering. It is the default mode for HTTP proxies with no FastCGI application configured. delete remove request headers with a name containing a character outside the "[a-zA-Z0-9-]" charset. It is the default mode for HTTP backends with a configured FastCGI application. reject reject the request with a 403-Forbidden response if it contains a header name with a character outside the "[a-zA-Z0-9-]" charset. ``` ``` This option may be used to restrict the request header names to alphanumeric and hyphen characters ([A-Za-z0-9-]). This may be mandatory to interoperate with non-HTTP compliant servers that fail to handle some characters in header names. It may also be mandatory for FastCGI applications because all non-alphanumeric characters in header names are replaced by an underscore ('_'). Thus, it is easily possible to mix up header names and bypass some rules. For instance, "X-Forwarded-For" and "X_Forwarded-For" headers are both converted to "HTTP_X_FORWARDED_FOR" in FastCGI. Note this option is evaluated per proxy and after the http-request rules evaluation. ``` **option http-server-close** **no option http-server-close** ``` Enable or disable HTTP connection closing on the server side ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` By default HAProxy operates in keep-alive mode with regards to persistent connections: for each connection it processes each request and response, and leaves the connection idle on both sides between the end of a response and the start of a new request. This mode may be changed by several options such as "[option http-server-close](#option%20http-server-close)" or "[option httpclose](#option%20httpclose)". Setting "option http-server-close" enables HTTP connection-close mode on the server side while keeping the ability to support HTTP keep-alive and pipelining on the client side. This provides the lowest latency on the client side (slow network) and the fastest session reuse on the server side to save server resources, similarly to "[option httpclose](#option%20httpclose)". It also permits non-keepalive capable servers to be served in keep-alive mode to the clients if they conform to the requirements of RFC7230. Please note that some servers do not always conform to those requirements when they see "Connection: close" in the request. The effect will be that keep-alive will never be used. A workaround consists in enabling "[option http-pretend-keepalive](#option%20http-pretend-keepalive)". At the moment, logs will not indicate whether requests came from the same session or not. The accept date reported in the logs corresponds to the end of the previous request, and the request time corresponds to the time spent waiting for a new request. The keep-alive request time is still bound to the timeout defined by "[timeout http-keep-alive](#timeout%20http-keep-alive)" or "[timeout http-request](#timeout%20http-request)" if not set. 
This option may be set both in a frontend and in a backend. It is enabled if at least one of the frontend or backend holding a connection has it enabled. It disables and replaces any previous "[option httpclose](#option%20httpclose)" or "option http-keep-alive". Please check [section 4](#4) ("Proxies") to see how this option combines with others when frontend and backend options differ. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option httpclose](#option%20httpclose)", "[option http-pretend-keepalive](#option%20http-pretend-keepalive)", "[option http-keep-alive](#option%20http-keep-alive)", and "1.1. The HTTP transaction model". **option http-use-proxy-header** **no option http-use-proxy-header** ``` Make use of non-standard Proxy-Connection header instead of Connection ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` While RFC7230 explicitly states that HTTP/1.1 agents must use the Connection header to indicate their wish of persistent or non-persistent connections, both browsers and proxies ignore this header for proxied connections and make use of the undocumented, non-standard Proxy-Connection header instead. The issue begins when trying to put a load balancer between browsers and such proxies, because there will be a difference between what HAProxy understands and what the client and the proxy agree on. By setting this option in a frontend, HAProxy can automatically switch to use that non-standard header if it sees proxied requests. A proxied request is defined here as one where the URI begins with neither a '/' nor a '*'. This is incompatible with the HTTP tunnel mode. Note that this option can only be specified in a frontend and will affect the request along its whole life. Also, when this option is set, a request which requires authentication will automatically switch to use proxy authentication headers if it is itself a proxied request. That makes it possible to check or enforce authentication in front of an existing proxy. This option should normally never be used, except in front of a proxy. ``` **See also :** "[option httpclose](#option%20httpclose)", and "[option http-server-close](#option%20http-server-close)". **option httpchk** **option httpchk** <uri> **option httpchk** <method> <uri> **option httpchk** <method> <uri> <version> ``` Enables HTTP protocol to check on the servers health ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <method> is the optional HTTP method used with the requests. When not set, the "OPTIONS" method is used, as it generally requires low server processing and is easy to filter out from the logs. Any method may be used, though it is not recommended to invent non-standard ones. <uri> is the URI referenced in the HTTP requests. It defaults to " / " which is accessible by default on almost any server, but may be changed to any other URI. Query strings are permitted. <version> is the optional HTTP version string. It defaults to "HTTP/1.0" but some servers might behave incorrectly in HTTP 1.0, so turning it to HTTP/1.1 may sometimes help. Note that the Host field is mandatory in HTTP/1.1, use "[http-check send](#http-check%20send)" directive to add it. ``` ``` By default, server health checks only consist in trying to establish a TCP connection. 
When "[option httpchk](#option%20httpchk)" is specified, a complete HTTP request is sent once the TCP connection is established, and responses 2xx and 3xx are considered valid, while all other ones indicate a server failure, including the lack of any response. Combined with "[http-check](#http-check)" directives, it is possible to customize the request sent during the HTTP health checks or the matching rules on the response. It is also possible to configure a send/expect sequence, just like with the directive "[tcp-check](#option%20tcp-check)" for TCP health checks. The server configuration is used by default to open connections to perform HTTP health checks. But it is also possible to overwrite server parameters using "[http-check connect](#http-check%20connect)" rules. The "[httpchk](#option%20httpchk)" option does not necessarily require an HTTP backend; it also works with plain TCP backends. This is particularly useful to check simple scripts bound to some dedicated ports using the inetd daemon. However, it will always internally rely on an HTX multiplexer. This means the request formatting and the response parsing will be strict. ``` Examples : ``` # Relay HTTPS traffic to Apache instance and check service availability # using HTTP request "OPTIONS * HTTP/1.1" on port 80. backend https_relay mode tcp option httpchk OPTIONS * HTTP/1.1 http-check send hdr Host www server apache1 192.168.1.1:443 check port 80 ``` **See also :** "[option ssl-hello-chk](#option%20ssl-hello-chk)", "[option smtpchk](#option%20smtpchk)", "[option mysql-check](#option%20mysql-check)", "[option pgsql-check](#option%20pgsql-check)", "[http-check](#http-check)" and the "[check](#check)", "[port](#port)" and "[inter](#inter)" server options. **option httpclose** **no option httpclose** ``` Enable or disable HTTP connection closing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` By default HAProxy operates in keep-alive mode with regards to persistent connections: for each connection it processes each request and response, and leaves the connection idle on both sides between the end of a response and the start of a new request. This mode may be changed by several options such as "[option http-server-close](#option%20http-server-close)" or "[option httpclose](#option%20httpclose)". If "[option httpclose](#option%20httpclose)" is set, HAProxy will close connections with the server and the client as soon as the request and the response are received. It will also check if a "Connection: close" header is already set in each direction, and will add one if missing. Any "Connection" header different from "close" will also be removed. This option may also be combined with "[option http-pretend-keepalive](#option%20http-pretend-keepalive)", which will disable sending of the "Connection: close" header, but will still cause the connection to be closed once the whole response is received. This option may be set both in a frontend and in a backend. It is enabled if at least one of the frontend or backend holding a connection has it enabled. It disables and replaces any previous "[option http-server-close](#option%20http-server-close)" or "option http-keep-alive". Please check [section 4](#4) ("Proxies") to see how this option combines with others when frontend and backend options differ. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. 
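As an illustrative sketch only (section and server names are arbitrary), the option can be set in a "defaults" section and reverted where it is not wanted :

    defaults
        mode http
        option httpclose

    backend keepalive_pool
        # revert to the default keep-alive behavior for this backend only
        no option httpclose
        server s1 10.0.0.31:80 check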
``` **See also :** "[option http-server-close](#option%20http-server-close)" and "1.1. The HTTP transaction model". **option httplog** [ clf ] ``` Enable logging of HTTP request, session state and timers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` clf if the "clf" argument is added, then the output format will be the CLF format instead of HAProxy's default HTTP format. You can use this when you need to feed HAProxy's logs through a specific log analyzer which only support the CLF format and which is not extensible. ``` ``` By default, the log output format is very poor, as it only contains the source and destination addresses, and the instance name. By specifying "[option httplog](#option%20httplog)", each log line turns into a much richer format including, but not limited to, the HTTP request, the connection timers, the session status, the connections numbers, the captured headers and cookies, the frontend, backend and server name, and of course the source address and ports. Specifying only "[option httplog](#option%20httplog)" will automatically clear the 'clf' mode if it was set by default. "[option httplog](#option%20httplog)" overrides any previous "[log-format](#log-format)" directive. ``` **See also :** [section 8](#8) about logging. **option httpslog** ``` Enable logging of HTTPS request, session state and timers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | ``` By default, the log output format is very poor, as it only contains the source and destination addresses, and the instance name. By specifying "[option httpslog](#option%20httpslog)", each log line turns into a much richer format including, but not limited to, the HTTP request, the connection timers, the session status, the connections numbers, the captured headers and cookies, the frontend, backend and server name, the SSL certificate verification and SSL handshake statuses, and of course the source address and ports. "[option httpslog](#option%20httpslog)" overrides any previous "[log-format](#log-format)" directive. ``` **See also :** [section 8](#8) about logging. **option independent-streams** **no option independent-streams** ``` Enable or disable independent timeout processing for both directions ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` By default, when data is sent over a socket, both the write timeout and the read timeout for that socket are refreshed, because we consider that there is activity on that socket, and we have no other means of guessing if we should receive data or not. While this default behavior is desirable for almost all applications, there exists a situation where it is desirable to disable it, and only refresh the read timeout if there are incoming data. This happens on sessions with large timeouts and low amounts of exchanged data such as telnet session. If the server suddenly disappears, the output data accumulates in the system's socket buffers, both timeouts are correctly refreshed, and there is no way to know the server does not receive them, so we don't timeout. However, when the underlying protocol always echoes sent data, it would be enough by itself to detect the issue using the read timeout. Note that this problem does not happen with more verbose protocols because data won't accumulate long in the socket buffers. 
When this option is set on the frontend, it will disable read timeout updates on data sent to the client. There is probably little use for this case. When the option is set on the backend, it will disable read timeout updates on data sent to the server. Doing so will typically break large HTTP posts from slow lines, so use it with caution. ``` **See also :** "timeout client", "timeout server" and "[timeout tunnel](#timeout%20tunnel)" **option ldap-check** ``` Use LDAPv3 health checks for server testing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` It is possible to test that the server correctly talks LDAPv3 instead of just testing that it accepts the TCP connection. When this option is set, an LDAPv3 anonymous simple bind message is sent to the server, and the response is analyzed to find an LDAPv3 bind response message. The server is considered valid only when the LDAP response contains a success resultCode (http://tools.ietf.org/html/rfc4511#section-4.1.9). Logging of bind requests is server dependent; see your documentation for how to configure it. ``` Example : ``` option ldap-check ``` **See also :** "[option httpchk](#option%20httpchk)" **option external-check** ``` Use external processes for server health checks ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | ``` It is possible to test the health of a server using an external command. This is achieved by running the executable set using "external-check command". Requires the "external-check" global to be set. ``` **See also :** "external-check", "[external-check command](#external-check%20command)", "[external-check path](#external-check%20path)" **option idle-close-on-response** **no option idle-close-on-response** ``` Avoid closing idle frontend connections if a soft stop is in progress ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` By default, idle connections will be closed during a soft stop. In some environments, a client talking to the proxy may have prepared some idle connections in order to send requests later. If there is no proper retry on write errors, this can result in errors while haproxy is reloading. Even though a proper implementation should retry on connection/write errors, this option was introduced to support backwards compatibility with haproxy prior to version 2.4. Indeed before v2.4, haproxy used to wait for a last request and response to add a "connection: close" header before closing, thus notifying the client that the connection would not be reusable. In a real life example, this behavior was seen in AWS using the ALB in front of a haproxy. The end result was the ALB sending 502 errors during haproxy reloads. Users are warned that using this option may increase the number of old processes if connections remain idle for too long. Adjusting the client timeouts and/or the "[hard-stop-after](#hard-stop-after)" parameter accordingly might be needed in case of frequent reloads. 
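As an illustrative sketch only (names, addresses and timer values are arbitrary), the option is set on the frontend, possibly together with a "hard-stop-after" safety net in the global section :

    global
        hard-stop-after 60s

    defaults
        mode http
        timeout client 30s

    frontend fe_main
        bind :80
        option idle-close-on-response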
``` **See also:** "timeout client", "[timeout client-fin](#timeout%20client-fin)", "[timeout http-request](#timeout%20http-request)", "[hard-stop-after](#hard-stop-after)" **option log-health-checks** **no option log-health-checks** ``` Enable or disable logging of health checks status updates ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` By default, failed health checks are logged if the server is UP and successful health checks are logged if the server is DOWN, so the amount of additional information is limited. When this option is enabled, any change of the health check status or of the server's health will be logged, so that it becomes possible to know that a server was failing occasional checks before crashing, or exactly when it failed to respond with a valid HTTP status, then when the port started to reject connections, then when the server stopped responding at all. Note that status changes not caused by health checks (e.g. enable/disable on the CLI) are intentionally not logged by this option. ``` **See also:** "[option httpchk](#option%20httpchk)", "[option ldap-check](#option%20ldap-check)", "[option mysql-check](#option%20mysql-check)", "[option pgsql-check](#option%20pgsql-check)", "[option redis-check](#option%20redis-check)", "[option smtpchk](#option%20smtpchk)", "[option tcp-check](#option%20tcp-check)", "log" and [section 8](#8) about logging. **option log-separate-errors** **no option log-separate-errors** ``` Change log level for non-completely successful connections ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` Sometimes looking for errors in logs is not easy. This option makes HAProxy raise the level of logs containing potentially interesting information such as errors, timeouts, retries, redispatches, or HTTP status codes 5xx. The level changes from "info" to "err". This makes it possible to log them separately to a different file with most syslog daemons. Be careful not to remove them from the original file, otherwise you would lose ordering, which provides very important information. Using this option, large sites dealing with several thousand connections per second may log normal traffic to a rotating buffer and only archive smaller error logs. ``` **See also :** "log", "[dontlognull](#option%20dontlognull)", "[dontlog-normal](#option%20dontlog-normal)" and [section 8](#8) about logging. **option logasap** **no option logasap** ``` Enable or disable early logging. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` By default, logs are emitted when all the log format variables and sample fetches used in the definition of the log-format string return a value, or when the session is terminated. This allows the built-in log-format strings to account for the transfer time, or the number of bytes in log messages. When handling long-lived connections such as large file transfers or RDP, it may take a while for the request or connection to appear in the logs. Using "[option logasap](#option%20logasap)", the log message is created as soon as the server connection is established in mode tcp, or as soon as the server sends the complete headers in mode http. 
Missing information in the logs will be the total number of bytes, which will only indicate the amount of data transferred before the message was created, and the total time, which will not take the remainder of the connection life or transfer time into account. For the case of HTTP, it is good practice to capture the Content-Length response header so that the logs at least indicate how many bytes are expected to be transferred. ``` Examples : ``` listen http_proxy 0.0.0.0:80 mode http option httplog option logasap log 192.168.2.200 local3 >>> Feb 6 12:14:14 localhost \ haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \ static/srv1 9/10/7/14/+30 200 +243 - - ---- 3/1/1/1/0 1/0 \ "GET /image.iso HTTP/1.0" ``` **See also :** "[option httplog](#option%20httplog)", "[capture response header](#capture%20response%20header)", and [section 8](#8) about logging. **option mysql-check** [ user <username> [ { post-41 | pre-41 } ] ] ``` Use MySQL health checks for server testing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <username> This is the username which will be used when connecting to MySQL server. post-41 Send post v4.1 client compatible checks (the default) pre-41 Send pre v4.1 client compatible checks ``` ``` If you specify a username, the check consists of sending two MySQL packets, one Client Authentication packet and one QUIT packet, to correctly close the MySQL session. We then parse the MySQL Handshake Initialization packet and/or Error packet. It is a basic but useful test which does not produce an error nor an aborted connect on the server. However, it requires an unlocked authorised user without a password. To create a basic limited user in MySQL with optional resource limits: CREATE USER '<username>'@'<ip_of_haproxy|network_of_haproxy/netmask>' /*!50701 WITH MAX_QUERIES_PER_HOUR 1 MAX_UPDATES_PER_HOUR 0 */ /*M!100201 MAX_STATEMENT_TIME 0.0001 */; If you don't specify a username (it is deprecated and not recommended), the check only consists in parsing the MySQL Handshake Initialization packet or Error packet; nothing is sent in this mode. It was reported that it can generate lockout if the check is too frequent and/or if there is not enough traffic. In this case you need to check the MySQL "max_connect_errors" value: if a connection is established successfully within fewer than "max_connect_errors" attempts after a previous connection was interrupted, the error count for the host is cleared to zero. If HAProxy's server gets blocked, the "FLUSH HOSTS" statement is the only way to unblock it. Remember that this does not check database presence nor database consistency. To do this, you can use an external check with xinetd for example. The check requires MySQL >=3.22; for older versions, please use a TCP check. Most often, an incoming MySQL server needs to see the client's IP address for various purposes, including IP privilege matching and connection logging. When possible, it is often wise to masquerade the client's IP address when connecting to the server using the "usesrc" argument of the "source" keyword, which requires the transparent proxy feature to be compiled in, and the MySQL server to route the client via the machine hosting HAProxy. 
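As an illustrative sketch only (the backend, server names, addresses and the "haproxy_check" username are arbitrary; the user must exist on the MySQL servers as shown above), the check is declared in a TCP backend :

    backend mysql_pool
        mode tcp
        balance leastconn
        option mysql-check user haproxy_check post-41
        server db1 10.0.0.41:3306 check
        server db2 10.0.0.42:3306 check backup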
``` **See also:** "[option httpchk](#option%20httpchk)" **option nolinger** **no option nolinger** ``` Enable or disable immediate session resource cleaning after close ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` When clients or servers abort connections in a dirty way (e.g. they are physically disconnected), the session timeout triggers and the session is closed. But it will remain in FIN_WAIT1 state for some time in the system, using some resources and possibly limiting the ability to establish new connections. When this happens, it is possible to activate "[option nolinger](#option%20nolinger)" which forces the system to immediately remove any socket's pending data on close. Thus, a TCP RST is emitted, any pending data are truncated, and the session is instantly purged from the system's tables. The generally visible effect for a client is that responses are truncated if the close happens with a last block of data (e.g. on a redirect or error response). On the server side, it may help release the source ports immediately when forwarding a client aborts in tunnels. In both cases, TCP resets are emitted and given that the session is instantly destroyed, there will be no retransmit. On a lossy network this can increase problems, especially when there is a firewall on the lossy side, because the firewall might see and process the reset (hence purge its session) and block any further traffic for this session, including retransmits from the other side. So if the other side doesn't receive it, it will never receive any RST again, and the firewall might log many blocked packets. For all these reasons, it is strongly recommended NOT to use this option, unless absolutely needed as a last resort. In most situations, using the "client-fin" or "server-fin" timeouts achieves similar results with a more reliable behavior. On Linux it's also possible to use the "tcp-ut" bind or server setting. This option may be used both on frontends and backends, depending on the side where it is required. Use it on the frontend for clients, and on the backend for servers. While this option is technically supported in "defaults" sections, it must really not be used there as it risks accidentally propagating to sections that must not use it and to cause problems there. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also:** "[timeout client-fin](#timeout%20client-fin)", "[timeout server-fin](#timeout%20server-fin)", "tcp-ut" bind or server keywords. **option originalto** [ except <network> ] [ header <name> ] ``` Enable insertion of the X-Original-To header to requests sent to servers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <network> is an optional argument used to disable this option for sources matching <network> <name> an optional argument to specify a different "X-Original-To" header name. ``` ``` Since HAProxy can work in transparent mode, every request from a client can be redirected to the proxy and HAProxy itself can proxy every request to a complex SQUID environment and the destination host from SO_ORIGINAL_DST will be lost. This is annoying when you want access rules based on destination IP addresses. 
To solve this problem, a new HTTP header "X-Original-To" may be added by HAProxy to all requests sent to the server. This header contains a value representing the original destination IP address. Since this header is always appended at the end of the existing header list, the server must be configured to always use the last occurrence of this header only. Note that only the last occurrence of the header must be used, since it is really possible that the client has already brought one. The keyword "header" may be used to supply a different header name to replace the default "X-Original-To". This can be useful where you might already have a "X-Original-To" header from a different application, and you need to preserve it. Also if your backend server doesn't use the "X-Original-To" header and requires a different one. Sometimes, the same HAProxy instance may be shared between a direct client access and a reverse-proxy access (for instance when an SSL reverse-proxy is used to decrypt HTTPS traffic). It is possible to disable the addition of the header for a known destination address or network by adding the "except" keyword followed by the network address. In this case, any destination IP matching the network will not cause an addition of this header. Most common uses are with private networks or 127.0.0.1. IPv4 and IPv6 are both supported. This option may be specified either in the frontend or in the backend. If at least one of them uses it, the header will be added. Note that the backend's setting of the header subargument takes precedence over the frontend's if both are defined. ``` Examples : ``` # Original Destination address frontend www mode http option originalto except 127.0.0.1 # Those servers want the IP Address in X-Client-Dst backend www mode http option originalto header X-Client-Dst ``` **See also :** "[option httpclose](#option%20httpclose)", "[option http-server-close](#option%20http-server-close)". **option persist** **no option persist** ``` Enable or disable forced persistence on down servers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` When an HTTP request reaches a backend with a cookie which references a dead server, by default it is redispatched to another server. It is possible to force the request to be sent to the dead server first using "[option persist](#option%20persist)" if absolutely needed. A common use case is when servers are under extreme load and spend their time flapping. In this case, the users would still be directed to the server they opened the session on, in the hope they would be correctly served. It is recommended to use "[option redispatch](#option%20redispatch)" in conjunction with this option so that in the event it would not be possible to connect to the server at all (server definitely dead), the client would finally be redirected to another valid server. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option redispatch](#option%20redispatch)", "[retries](#retries)", "[force-persist](#force-persist)" **option pgsql-check user** <username> ``` Use PostgreSQL health checks for server testing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <username> This is the username which will be used when connecting to PostgreSQL server. 
``` ``` The check sends a PostgreSQL StartupMessage and waits for either Authentication request or ErrorResponse message. It is a basic but useful test which does not produce error nor aborted connect on the server. This check is identical with the "[mysql-check](#option%20mysql-check)". ``` **See also:** "[option httpchk](#option%20httpchk)" **option prefer-last-server** **no option prefer-last-server** ``` Allow multiple load balanced requests to remain on the same server ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` When the load balancing algorithm in use is not deterministic, and a previous request was sent to a server to which HAProxy still holds a connection, it is sometimes desirable that subsequent requests on a same session go to the same server as much as possible. Note that this is different from persistence, as we only indicate a preference which HAProxy tries to apply without any form of warranty. The real use is for keep-alive connections sent to servers. When this option is used, HAProxy will try to reuse the same connection that is attached to the server instead of rebalancing to another server, causing a close of the connection. This can make sense for static file servers. It does not make much sense to use this in combination with hashing algorithms. Note, HAProxy already automatically tries to stick to a server which sends a 401 or to a proxy which sends a 407 (authentication required), when the load balancing algorithm is not deterministic. This is mandatory for use with the broken NTLM authentication challenge, and significantly helps in troubleshooting some faulty applications. Option prefer-last-server might be desirable in these environments as well, to avoid redistributing the traffic after every other response. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also:** "[option http-keep-alive](#option%20http-keep-alive)" **option redispatch** **option redispatch** <interval> **no option redispatch** ``` Enable or disable session redistribution in case of connection failure ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <interval> The optional integer value that controls how often redispatches occur when retrying connections. Positive value P indicates a redispatch is desired on every Pth retry, and negative value N indicate a redispatch is desired on the Nth retry prior to the last retry. For example, the default of -1 preserves the historical behavior of redispatching on the last retry, a positive value of 1 would indicate a redispatch on every retry, and a positive value of 3 would indicate a redispatch on every third retry. You can disable redispatches with a value of 0. ``` ``` In HTTP mode, if a server designated by a cookie is down, clients may definitely stick to it because they cannot flush the cookie, so they will not be able to access the service anymore. Specifying "[option redispatch](#option%20redispatch)" will allow the proxy to break cookie or consistent hash based persistence and redistribute them to a working server. Active servers are selected from a subset of the list of available servers. Active servers that are not down or in maintenance (i.e., whose health is not checked or that have been checked as "up"), are selected in the following order: 1. 
Any active, non-backup server, if any, or, 2. If the "[allbackups](#option%20allbackups)" option is not set, the first backup server in the list, or 3. If the "[allbackups](#option%20allbackups)" option is set, any backup server. When a retry occurs, HAProxy tries to select another server than the last one. The new server is selected from the current list of servers. Sometimes, if the list is updated between retries (e.g., if numerous retries occur and last longer than the time needed to check that a server is down, remove it from the list and fall back on the list of backup servers), connections may be redirected to a backup server, though. It also allows to retry connections to another server in case of multiple connection failures. Of course, it requires having "[retries](#retries)" set to a nonzero value. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[retries](#retries)", "[force-persist](#force-persist)" **option redis-check** ``` Use redis health checks for server testing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` It is possible to test that the server correctly talks REDIS protocol instead of just testing that it accepts the TCP connection. When this option is set, a PING redis command is sent to the server, and the response is analyzed to find the "+PONG" response message. ``` Example : ``` option redis-check ``` **See also :** "[option httpchk](#option%20httpchk)", "[option tcp-check](#option%20tcp-check)", "[tcp-check expect](#tcp-check%20expect)" **option smtpchk** **option smtpchk** <hello> <domain> ``` Use SMTP health checks for server testing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <hello> is an optional argument. It is the "hello" command to use. It can be either "HELO" (for SMTP) or "EHLO" (for ESMTP). All other values will be turned into the default command ("HELO"). <domain> is the domain name to present to the server. It may only be specified (and is mandatory) if the hello command has been specified. By default, "localhost" is used. ``` ``` When "[option smtpchk](#option%20smtpchk)" is set, the health checks will consist in TCP connections followed by an SMTP command. By default, this command is "HELO localhost". The server's return code is analyzed and only return codes starting with a "2" will be considered as valid. All other responses, including a lack of response will constitute an error and will indicate a dead server. This test is meant to be used with SMTP servers or relays. Depending on the request, it is possible that some servers do not log each connection attempt, so you may want to experiment to improve the behavior. Using telnet on port 25 is often easier than adjusting the configuration. Most often, an incoming SMTP server needs to see the client's IP address for various purposes, including spam filtering, anti-spoofing and logging. When possible, it is often wise to masquerade the client's IP address when connecting to the server using the "usesrc" argument of the "source" keyword, which requires the transparent proxy feature to be compiled in. 
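In addition to the HELO form shown in the example below, an ESMTP-capable relay can be probed with the EHLO form; a minimal sketch, in which the backend name, server address and domain are placeholders:

```
backend mail_relays
    mode tcp
    # send "EHLO mail.example.com" and expect a 2xx reply
    option smtpchk EHLO mail.example.com
    server smtp1 192.168.0.10:25 check
```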
``` Example : ``` option smtpchk HELO mydomain.org ``` **See also :** "[option httpchk](#option%20httpchk)", "source" **option socket-stats** **no option socket-stats** ``` Enable or disable collecting & providing separate statistics for each socket. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none **option splice-auto** **no option splice-auto** ``` Enable or disable automatic kernel acceleration on sockets in both directions ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` When this option is enabled either on a frontend or on a backend, HAProxy will automatically evaluate the opportunity to use kernel tcp splicing to forward data between the client and the server, in either direction. HAProxy uses heuristics to estimate if kernel splicing might improve performance or not. Both directions are handled independently. Note that the heuristics used are not much aggressive in order to limit excessive use of splicing. This option requires splicing to be enabled at compile time, and may be globally disabled with the global option "[nosplice](#nosplice)". Since splice uses pipes, using it requires that there are enough spare pipes. Important note: kernel-based TCP splicing is a Linux-specific feature which first appeared in kernel 2.6.25. It offers kernel-based acceleration to transfer data between sockets without copying these data to user-space, thus providing noticeable performance gains and CPU cycles savings. Since many early implementations are buggy, corrupt data and/or are inefficient, this feature is not enabled by default, and it should be used with extreme care. While it is not possible to detect the correctness of an implementation, 2.6.29 is the first version offering a properly working implementation. In case of doubt, splicing may be globally disabled using the global "[nosplice](#nosplice)" keyword. ``` Example : ``` option splice-auto ``` ``` If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option splice-request](#option%20splice-request)", "[option splice-response](#option%20splice-response)", and global options "[nosplice](#nosplice)" and "[maxpipes](#maxpipes)" **option splice-request** **no option splice-request** ``` Enable or disable automatic kernel acceleration on sockets for requests ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` When this option is enabled either on a frontend or on a backend, HAProxy will use kernel tcp splicing whenever possible to forward data going from the client to the server. It might still use the recv/send scheme if there are no spare pipes left. This option requires splicing to be enabled at compile time, and may be globally disabled with the global option "[nosplice](#nosplice)". Since splice uses pipes, using it requires that there are enough spare pipes. Important note: see "[option splice-auto](#option%20splice-auto)" for usage limitations. ``` Example : ``` option splice-request ``` ``` If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. 
``` **See also :** "[option splice-auto](#option%20splice-auto)", "[option splice-response](#option%20splice-response)", and global options "[nosplice](#nosplice)" and "[maxpipes](#maxpipes)" **option splice-response** **no option splice-response** ``` Enable or disable automatic kernel acceleration on sockets for responses ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` When this option is enabled either on a frontend or on a backend, HAProxy will use kernel tcp splicing whenever possible to forward data going from the server to the client. It might still use the recv/send scheme if there are no spare pipes left. This option requires splicing to be enabled at compile time, and may be globally disabled with the global option "[nosplice](#nosplice)". Since splice uses pipes, using it requires that there are enough spare pipes. Important note: see "[option splice-auto](#option%20splice-auto)" for usage limitations. ``` Example : ``` option splice-response ``` ``` If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option splice-auto](#option%20splice-auto)", "[option splice-request](#option%20splice-request)", and global options "[nosplice](#nosplice)" and "[maxpipes](#maxpipes)" **option spop-check** ``` Use SPOP health checks for server testing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | no | yes | Arguments : none ``` It is possible to test that the server correctly talks SPOP protocol instead of just testing that it accepts the TCP connection. When this option is set, a HELLO handshake is performed between HAProxy and the server, and the response is analyzed to check no error is reported. ``` Example : ``` option spop-check ``` **See also :** "[option httpchk](#option%20httpchk)" **option srvtcpka** **no option srvtcpka** ``` Enable or disable the sending of TCP keepalive packets on the server side ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` When there is a firewall or any session-aware component between a client and a server, and when the protocol involves very long sessions with long idle periods (e.g. remote desktops), there is a risk that one of the intermediate components decides to expire a session which has remained idle for too long. Enabling socket-level TCP keep-alives makes the system regularly send packets to the other end of the connection, leaving it active. The delay between keep-alive probes is controlled by the system only and depends both on the operating system and its tuning parameters. It is important to understand that keep-alive packets are neither emitted nor received at the application level. It is only the network stacks which sees them. For this reason, even if one side of the proxy already uses keep-alives to maintain its connection alive, those keep-alive packets will not be forwarded to the other side of the proxy. Please note that this has nothing to do with HTTP keep-alive. Using option "[srvtcpka](#option%20srvtcpka)" enables the emission of TCP keep-alive probes on the server side of a connection, which should help when session expirations are noticed between HAProxy and a server. 
If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option clitcpka](#option%20clitcpka)", "[option tcpka](#option%20tcpka)" **option ssl-hello-chk** ``` Use SSLv3 client hello health checks for server testing ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` When some SSL-based protocols are relayed in TCP mode through HAProxy, it is possible to test that the server correctly talks SSL instead of just testing that it accepts the TCP connection. When "[option ssl-hello-chk](#option%20ssl-hello-chk)" is set, pure SSLv3 client hello messages are sent once the connection is established to the server, and the response is analyzed to find an SSL server hello message. The server is considered valid only when the response contains this server hello message. All servers tested till there correctly reply to SSLv3 client hello messages, and most servers tested do not even log the requests containing only hello messages, which is appreciable. Note that this check works even when SSL support was not built into HAProxy because it forges the SSL message. When SSL support is available, it is best to use native SSL health checks instead of this one. ``` **See also:** "[option httpchk](#option%20httpchk)", "[check-ssl](#check-ssl)" **option tcp-check** ``` Perform health checks using tcp-check send/expect sequences ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | ``` This health check method is intended to be combined with "[tcp-check](#option%20tcp-check)" command lists in order to support send/expect types of health check sequences. TCP checks currently support 4 modes of operations : - no "[tcp-check](#option%20tcp-check)" directive : the health check only consists in a connection attempt, which remains the default mode. - "[tcp-check send](#tcp-check%20send)" or "[tcp-check send-binary](#tcp-check%20send-binary)" only is mentioned : this is used to send a string along with a connection opening. With some protocols, it helps sending a "QUIT" message for example that prevents the server from logging a connection error for each health check. The check result will still be based on the ability to open the connection only. - "[tcp-check expect](#tcp-check%20expect)" only is mentioned : this is used to test a banner. The connection is opened and HAProxy waits for the server to present some contents which must validate some rules. The check result will be based on the matching between the contents and the rules. This is suited for POP, IMAP, SMTP, FTP, SSH, TELNET. - both "[tcp-check send](#tcp-check%20send)" and "[tcp-check expect](#tcp-check%20expect)" are mentioned : this is used to test a hello-type protocol. HAProxy sends a message, the server responds and its response is analyzed. the check result will be based on the matching between the response contents and the rules. This is often suited for protocols which require a binding or a request/response model. LDAP, MySQL, Redis and SSL are example of such protocols, though they already all have their dedicated checks with a deeper understanding of the respective protocols. In this mode, many questions may be sent and many answers may be analyzed. A fifth mode can be used to insert comments in different steps of the script. 
For each tcp-check rule you create, you can add a "comment" directive, followed by a string. This string will be reported in the log and stderr in debug mode. It is useful for user-friendly error reporting. The "comment" is of course optional. During the execution of a health check, a variable scope is made available to store data samples, using the "tcp-check set-var" operation. Freeing those variables is possible using "[tcp-check unset-var](#tcp-check%20unset-var)". ``` Examples : ``` # perform a POP check (analyze only server's banner) option tcp-check tcp-check expect string +OK\ POP3\ ready comment POP\ protocol # perform an IMAP check (analyze only server's banner) option tcp-check tcp-check expect string *\ OK\ IMAP4\ ready comment IMAP\ protocol # look for the redis master server after ensuring it speaks well # redis protocol, then it exits properly. # (send a command then analyze the response 3 times) option tcp-check tcp-check comment PING\ phase tcp-check send PING\r\n tcp-check expect string +PONG tcp-check comment role\ check tcp-check send info\ replication\r\n tcp-check expect string role:master tcp-check comment QUIT\ phase tcp-check send QUIT\r\n tcp-check expect string +OK # forge an HTTP request, then analyze the response # (send many headers before analyzing) option tcp-check tcp-check comment forge\ and\ send\ HTTP\ request tcp-check send HEAD\ /\ HTTP/1.1\r\n tcp-check send Host:\ www.mydomain.com\r\n tcp-check send User-Agent:\ HAProxy\ tcpcheck\r\n tcp-check send \r\n tcp-check expect rstring HTTP/1\..\ (2..|3..) comment check\ HTTP\ response ``` **See also :** "[tcp-check connect](#tcp-check%20connect)", "[tcp-check expect](#tcp-check%20expect)" and "[tcp-check send](#tcp-check%20send)". **option tcp-smart-accept** **no option tcp-smart-accept** ``` Enable or disable the saving of one ACK packet during the accept sequence ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` When an HTTP connection request comes in, the system acknowledges it on behalf of HAProxy, then the client immediately sends its request, and the system acknowledges it too while it is notifying HAProxy about the new connection. HAProxy then reads the request and responds. This means that we have one TCP ACK sent by the system for nothing, because the request could very well be acknowledged by HAProxy when it sends its response. For this reason, in HTTP mode, HAProxy automatically asks the system to avoid sending this useless ACK on platforms which support it (currently at least Linux). It must not cause any problem, because the system will send it anyway after 40 ms if the response takes more time than expected to come. During complex network debugging sessions, it may be desirable to disable this optimization because delayed ACKs can make troubleshooting more complex when trying to identify where packets are delayed. It is then possible to fall back to normal behavior by specifying "[no option tcp-smart-accept](#no%20option%20tcp-smart-accept)". It is also possible to force it for non-HTTP proxies by simply specifying "[option tcp-smart-accept](#option%20tcp-smart-accept)". For instance, it can make sense with some services such as SMTP where the server speaks first. It is recommended to avoid forcing this option in a defaults section. In case of doubt, consider setting it back to automatic values by prepending the "default" keyword before it, or disabling it using the "no" keyword.
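As an illustration of the troubleshooting advice above, the optimization can be switched off on an HTTP frontend while a capture is running; a minimal sketch in which the proxy names and addresses are made up:

```
frontend www
    mode http
    bind :80
    # fall back to the normal accept sequence while debugging delayed packets
    no option tcp-smart-accept
    default_backend app
```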
``` **See also :** "[option tcp-smart-connect](#option%20tcp-smart-connect)" **option tcp-smart-connect** **no option tcp-smart-connect** ``` Enable or disable the saving of one ACK packet during the connect sequence ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` On certain systems (at least Linux), HAProxy can ask the kernel not to immediately send an empty ACK upon a connection request, but to directly send the buffer request instead. This saves one packet on the network and thus boosts performance. It can also be useful for some servers, because they immediately get the request along with the incoming connection. This feature is enabled when "[option tcp-smart-connect](#option%20tcp-smart-connect)" is set in a backend. It is not enabled by default because it makes network troubleshooting more complex. It only makes sense to enable it with protocols where the client speaks first such as HTTP. In other situations, if there is no data to send in place of the ACK, a normal ACK is sent. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it. ``` **See also :** "[option tcp-smart-accept](#option%20tcp-smart-accept)" **option tcpka** ``` Enable or disable the sending of TCP keepalive packets on both sides ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` When there is a firewall or any session-aware component between a client and a server, and when the protocol involves very long sessions with long idle periods (e.g. remote desktops), there is a risk that one of the intermediate components decides to expire a session which has remained idle for too long. Enabling socket-level TCP keep-alives makes the system regularly send packets to the other end of the connection, leaving it active. The delay between keep-alive probes is controlled by the system only and depends both on the operating system and its tuning parameters. It is important to understand that keep-alive packets are neither emitted nor received at the application level. It is only the network stacks which sees them. For this reason, even if one side of the proxy already uses keep-alives to maintain its connection alive, those keep-alive packets will not be forwarded to the other side of the proxy. Please note that this has nothing to do with HTTP keep-alive. Using option "[tcpka](#option%20tcpka)" enables the emission of TCP keep-alive probes on both the client and server sides of a connection. Note that this is meaningful only in "defaults" or "listen" sections. If this option is used in a frontend, only the client side will get keep-alives, and if this option is used in a backend, only the server side will get keep-alives. For this reason, it is strongly recommended to explicitly use "[option clitcpka](#option%20clitcpka)" and "[option srvtcpka](#option%20srvtcpka)" when the configuration is split between frontends and backends. 
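A minimal sketch of the recommendation above, with illustrative proxy names and addresses, enabling keep-alives explicitly on each side of a split configuration:

```
frontend fe_rdp
    mode tcp
    bind :3389
    # client-side TCP keep-alives
    option clitcpka
    default_backend be_rdp

backend be_rdp
    mode tcp
    # server-side TCP keep-alives
    option srvtcpka
    server ts1 10.0.0.10:3389
```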
``` **See also :** "[option clitcpka](#option%20clitcpka)", "[option srvtcpka](#option%20srvtcpka)" **option tcplog** ``` Enable advanced logging of TCP connections with session state and timers ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : none ``` By default, the log output format is very poor, as it only contains the source and destination addresses, and the instance name. By specifying "[option tcplog](#option%20tcplog)", each log line turns into a much richer format including, but not limited to, the connection timers, the session status, the connections numbers, the frontend, backend and server name, and of course the source address and ports. This option is useful for pure TCP proxies in order to find which of the client or server disconnects or times out. For normal HTTP proxies, it's better to use "[option httplog](#option%20httplog)" which is even more complete. "[option tcplog](#option%20tcplog)" overrides any previous "[log-format](#log-format)" directive. ``` **See also :** "[option httplog](#option%20httplog)", and [section 8](#8) about logging. **option transparent** **no option transparent** ``` Enable client-side transparent proxying ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` This option was introduced in order to provide layer 7 persistence to layer 3 load balancers. The idea is to use the OS's ability to redirect an incoming connection for a remote address to a local process (here HAProxy), and let this process know what address was initially requested. When this option is used, sessions without cookies will be forwarded to the original destination IP address of the incoming request (which should match that of another equipment), while requests with cookies will still be forwarded to the appropriate server. Note that contrary to a common belief, this option does NOT make HAProxy present the client's IP to the server when establishing the connection. ``` **See also:** the "usesrc" argument of the "source" keyword, and the "[transparent](#option%20transparent)" option of the "bind" keyword. **external-check command** <command> ``` Executable to run when performing an external-check ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <command> is the external command to run ``` ``` The arguments passed to the to the command are: <proxy_address> <proxy_port> <server_address> <server_port> The <proxy_address> and <proxy_port> are derived from the first listener that is either IPv4, IPv6 or a UNIX socket. In the case of a UNIX socket listener the proxy_address will be the path of the socket and the <proxy_port> will be the string "NOT_USED". In a backend section, it's not possible to determine a listener, and both <proxy_address> and <proxy_port> will have the string value "NOT_USED". Some values are also provided through environment variables. Environment variables : HAPROXY_PROXY_ADDR The first bind address if available (or empty if not applicable, for example in a "backend" section). HAPROXY_PROXY_ID The backend id. HAPROXY_PROXY_NAME The backend name. HAPROXY_PROXY_PORT The first bind port if available (or empty if not applicable, for example in a "backend" section or for a UNIX socket). HAPROXY_SERVER_ADDR The server address. HAPROXY_SERVER_CURCONN The current number of connections on the server. 
HAPROXY_SERVER_ID The server id. HAPROXY_SERVER_MAXCONN The server max connections. HAPROXY_SERVER_NAME The server name. HAPROXY_SERVER_PORT The server port if available (or empty for a UNIX socket). HAPROXY_SERVER_SSL "0" when SSL is not used, "1" when it is used HAPROXY_SERVER_PROTO The protocol used by this server, which can be one of "cli" (the haproxy CLI), "syslog" (syslog TCP server), "[peers](#peers)" (peers TCP server), "h1" (HTTP/1.x server), "h2" (HTTP/2 server), or "tcp" (any other TCP server). PATH The PATH environment variable used when executing the command may be set using "[external-check path](#external-check%20path)". See also "2.3. Environment variables" for other variables. If the command executed and exits with a zero status then the check is considered to have passed, otherwise the check is considered to have failed. ``` Example : ``` external-check command /bin/true ``` **See also :** "external-check", "[option external-check](#option%20external-check)", "[external-check path](#external-check%20path)" **external-check path** <path> ``` The value of the PATH environment variable used when running an external-check ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <path> is the path used when executing external command to run ``` ``` The default path is "". ``` Example : ``` external-check path "/usr/bin:/bin" ``` **See also :** "external-check", "[option external-check](#option%20external-check)", "[external-check command](#external-check%20command)" **persist rdp-cookie** **persist rdp-cookie**(<name>) ``` Enable RDP cookie-based persistence ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <name> is the optional name of the RDP cookie to check. If omitted, the default cookie name "msts" will be used. There currently is no valid reason to change this name. ``` ``` This statement enables persistence based on an RDP cookie. The RDP cookie contains all information required to find the server in the list of known servers. So when this option is set in the backend, the request is analyzed and if an RDP cookie is found, it is decoded. If it matches a known server which is still UP (or if "[option persist](#option%20persist)" is set), then the connection is forwarded to this server. Note that this only makes sense in a TCP backend, but for this to work, the frontend must have waited long enough to ensure that an RDP cookie is present in the request buffer. This is the same requirement as with the "rdp-cookie" load-balancing method. Thus it is highly recommended to put all statements in a single "listen" section. Also, it is important to understand that the terminal server will emit this RDP cookie only if it is configured for "token redirection mode", which means that the "IP address redirection" option is disabled. ``` Example : ``` listen tse-farm bind :3389 # wait up to 5s for an RDP cookie in the request tcp-request inspect-delay 5s tcp-request content accept if RDP_COOKIE # apply RDP cookie persistence persist rdp-cookie # if server is unknown, let's balance on the same cookie. # alternatively, "balance leastconn" may be useful too. balance rdp-cookie server srv1 1.1.1.1:3389 server srv2 1.1.1.2:3389 ``` **See also :** "balance rdp-cookie", "[tcp-request](#tcp-request)" and the "[req.rdp\_cookie](#req.rdp_cookie)" ACL. 
**rate-limit sessions** <rate> ``` Set a limit on the number of new sessions accepted per second on a frontend ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <rate> The <rate> parameter is an integer designating the maximum number of new sessions per second to accept on the frontend. ``` ``` When the frontend reaches the specified number of new sessions per second, it stops accepting new connections until the rate drops below the limit again. During this time, the pending sessions will be kept in the socket's backlog (in system buffers) and HAProxy will not even be aware that sessions are pending. When applying very low limit on a highly loaded service, it may make sense to increase the socket's backlog using the "backlog" keyword. This feature is particularly efficient at blocking connection-based attacks or service abuse on fragile servers. Since the session rate is measured every millisecond, it is extremely accurate. Also, the limit applies immediately, no delay is needed at all to detect the threshold. ``` Example : ``` # Limit the connection rate on SMTP to 10 per second max listen smtp mode tcp bind :25 rate-limit sessions 10 server smtp1 127.0.0.1:1025 ``` ``` Note : when the maximum rate is reached, the frontend's status is not changed but its sockets appear as "WAITING" in the statistics if the "[socket-stats](#option%20socket-stats)" option is enabled. ``` **See also :** the "backlog" keyword and the "[fe\_sess\_rate](#fe_sess_rate)" ACL criterion. **redirect location** <loc> [code <code>] <option> [{if | unless} <condition>] **redirect prefix** <pfx> [code <code>] <option> [{if | unless} <condition>] **redirect scheme** <sch> [code <code>] <option> [{if | unless} <condition>] ``` Return an HTTP redirection if/unless a condition is matched ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | yes | ``` If/unless the condition is matched, the HTTP request will lead to a redirect response. If no condition is specified, the redirect applies unconditionally. ``` Arguments : ``` <loc> With "[redirect location](#redirect%20location)", the exact value in <loc> is placed into the HTTP "Location" header. When used in an "http-request" rule, <loc> value follows the log-format rules and can include some dynamic values (see Custom Log Format in [section 8.2.4](#8.2.4)). <pfx> With "[redirect prefix](#redirect%20prefix)", the "Location" header is built from the concatenation of <pfx> and the complete URI path, including the query string, unless the "drop-query" option is specified (see below). As a special case, if <pfx> equals exactly "/", then nothing is inserted before the original URI. It allows one to redirect to the same URL (for instance, to insert a cookie). When used in an "http-request" rule, <pfx> value follows the log-format rules and can include some dynamic values (see Custom Log Format in [section 8.2.4](#8.2.4)). <sch> With "[redirect scheme](#redirect%20scheme)", then the "Location" header is built by concatenating <sch> with "://" then the first occurrence of the "Host" header, and then the URI path, including the query string unless the "drop-query" option is specified (see below). If no path is found or if the path is "*", then "/" is used instead. If no "Host" header is found, then an empty host component will be returned, which most recent browsers interpret as redirecting to the same host.
This directive is mostly used to redirect HTTP to HTTPS. When used in an "http-request" rule, <sch> value follows the log-format rules and can include some dynamic values (see Custom Log Format in [section 8.2.4](#8.2.4)). <code> The code is optional. It indicates which type of HTTP redirection is desired. Only codes 301, 302, 303, 307 and 308 are supported, with 302 used by default if no code is specified. 301 means "Moved permanently", and a browser may cache the Location. 302 means "Moved temporarily" and means that the browser should not cache the redirection. 303 is equivalent to 302 except that the browser will fetch the location with a GET method. 307 is just like 302 but makes it clear that the same method must be reused. Likewise, 308 replaces 301 if the same method must be used. <option> There are several options which can be specified to adjust the expected behavior of a redirection : - "drop-query" When this keyword is used in a prefix-based redirection, then the location will be set without any possible query-string, which is useful for directing users to a non-secure page for instance. It has no effect with a location-type redirect. - "append-slash" This keyword may be used in conjunction with "drop-query" to redirect users who use a URL not ending with a '/' to the same one with the '/'. It can be useful to ensure that search engines will only see one URL. For this, a return code 301 is preferred. - "ignore-empty" This keyword only has effect when a location is produced using a log format expression (i.e. when used in http-request or http-response). It indicates that if the result of the expression is empty, the rule should silently be skipped. The main use is to allow mass-redirects of known paths using a simple map. - "set-cookie NAME[=value]" A "Set-Cookie" header will be added with NAME (and optionally "=value") to the response. This is sometimes used to indicate that a user has been seen, for instance to protect against some types of DoS. No other cookie option is added, so the cookie will be a session cookie. Note that for a browser, a sole cookie name without an equal sign is different from a cookie with an equal sign. - "clear-cookie NAME[=]" A "Set-Cookie" header will be added with NAME (and optionally "="), but with the "Max-Age" attribute set to zero. This will tell the browser to delete this cookie. It is useful for instance on logout pages. It is important to note that clearing the cookie "NAME" will not remove a cookie set with "NAME=value". You have to clear the cookie "NAME=" for that, because the browser makes the difference. ``` Example: ``` # Move the login URL only to HTTPS. acl clear dst_port 80 acl secure dst_port 8080 acl login_page url_beg /login acl logout url_beg /logout acl uid_given url_reg /login?userid=[^&]+ acl cookie_set hdr_sub(cookie) SEEN=1 redirect prefix https://mysite.com set-cookie SEEN=1 if !cookie_set redirect prefix https://mysite.com if login_page !secure redirect prefix http://mysite.com drop-query if login_page !uid_given redirect location http://mysite.com/ if !login_page secure redirect location / clear-cookie USERID= if logout ``` Example: ``` # Send redirects for requests for articles without a '/'. acl missing_slash path_reg ^/article/[^/]*$ redirect code 301 prefix / drop-query append-slash if missing_slash ``` Example: ``` # Redirect all HTTP traffic to HTTPS when SSL is handled by HAProxy. redirect scheme https if !{ ssl_fc } ``` Example: ``` # Append 'www.'
# prefix in front of all hosts not having it http-request redirect code 301 location \ http://www.%[hdr(host)]%[capture.req.uri] \ unless { hdr_beg(host) -i www } ``` Example: ``` # Permanently redirect only old URLs to new ones http-request redirect code 301 location \ %[path,map_str(old-blog-articles.map)] ignore-empty ``` ``` See [section 7](#7) about ACL usage. ``` **retries** <value> ``` Set the number of retries to perform on a server after a failure ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <value> is the number of times a request or connection attempt should be retried on a server after a failure. ``` ``` By default, retries apply only to new connection attempts. However, when the "[retry-on](#retry-on)" directive is used, other conditions might trigger a retry (e.g. empty response, undesired status code), and each of them will count one attempt, and when the total number of attempts reaches the value here, an error will be returned. In order to avoid immediate reconnections to a server which is restarting, a turn-around timer of min("timeout connect", one second) is applied before a retry occurs on the same server. When "[option redispatch](#option%20redispatch)" is set, some retries may be performed on another server even if a cookie references a different server. By default this will only be the last retry unless an argument is passed to "[option redispatch](#option%20redispatch)". ``` **See also :** "[option redispatch](#option%20redispatch)" **retry-on** [space-delimited list of keywords] ``` Specify when to attempt to automatically retry a failed request. This setting is only valid when "mode" is set to http and is silently ignored otherwise. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <keywords> is a space-delimited list of keywords or HTTP status codes, each representing a type of failure event on which an attempt to retry the request is desired. Please read the notes at the bottom before changing this setting. The following keywords are supported : none never retry conn-failure retry when the connection or the SSL handshake failed and the request could not be sent. This is the default. empty-response retry when the server connection was closed after part of the request was sent, and nothing was received from the server. This type of failure may be caused by the request timeout on the server side, poor network condition, or a server crash or restart while processing the request. junk-response retry when the server returned something not looking like a complete HTTP response. This includes partial response headers as well as non-HTTP contents. It usually is a bad idea to retry on such events, which may be caused by a configuration issue (wrong server port) or by the request being harmful to the server (buffer overflow attack for example). response-timeout the server timeout struck while waiting for the server to respond to the request. This may be caused by poor network condition, the reuse of an idle connection which has expired on the path, or by the request being extremely expensive to process. It generally is a bad idea to retry on such events on servers dealing with heavy database processing (full scans, etc) as it may amplify denial of service attacks. 0rtt-rejected retry requests which were sent over early data and were rejected by the server.
These requests are generally considered to be safe to retry. <status> any HTTP status code among "401" (Unauthorized), "403" (Forbidden), "404" (Not Found), "408" (Request Timeout), "425" (Too Early), "500" (Server Error), "501" (Not Implemented), "502" (Bad Gateway), "503" (Service Unavailable), "504" (Gateway Timeout). all-retryable-errors retry request for any error that are considered retryable. This currently activates "conn-failure", "empty-response", "junk-response", "response-timeout", "0rtt-rejected", "500", "502", "503", and "504". ``` ``` Using this directive replaces any previous settings with the new ones; it is not cumulative. Please note that using anything other than "none" and "conn-failure" requires to allocate a buffer and copy the whole request into it, so it has memory and performance impacts. Requests not fitting in a single buffer will never be retried (see the global tune.bufsize setting). You have to make sure the application has a replay protection mechanism built in such as a unique transaction IDs passed in requests, or that replaying the same request has no consequence, or it is very dangerous to use any retry-on value beside "conn-failure" and "none". Static file servers and caches are generally considered safe against any type of retry. Using a status code can be useful to quickly leave a server showing an abnormal behavior (out of memory, file system issues, etc), but in this case it may be a good idea to immediately redispatch the connection to another server (please see "option redispatch" for this). Last, it is important to understand that most causes of failures are the requests themselves and that retrying a request causing a server to misbehave will often make the situation even worse for this server, or for the whole service in case of redispatch. Unless you know exactly how the application deals with replayed requests, you should not use this directive. The default is "conn-failure". ``` Example: ``` retry-on 503 504 ``` **See also:** "[retries](#retries)", "[option redispatch](#option%20redispatch)", "[tune.bufsize](#tune.bufsize)" **server** <name> <address>[:[port]] [param\*] ``` Declare a server in a backend ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | Arguments : ``` <name> is the internal name assigned to this server. This name will appear in logs and alerts. If "[http-send-name-header](#http-send-name-header)" is set, it will be added to the request header sent to the server. <address> is the IPv4 or IPv6 address of the server. Alternatively, a resolvable hostname is supported, but this name will be resolved during start-up. Address "0.0.0.0" or "*" has a special meaning. It indicates that the connection will be forwarded to the same IP address as the one from the client connection. This is useful in transparent proxy architectures where the client's connection is intercepted and HAProxy must forward to the original destination address. This is more or less what the "[transparent](#option%20transparent)" keyword does except that with a server it's possible to limit concurrency and to report statistics. Optionally, an address family prefix may be used before the address to force the family regardless of the address format, which can be useful to specify a path to a unix socket with no slash ('/'). 
Currently supported prefixes are : - 'ipv4@' -> address is always IPv4 - 'ipv6@' -> address is always IPv6 - 'unix@' -> address is a path to a local unix socket - 'abns@' -> address is in abstract namespace (Linux only) - 'sockpair@' -> address is the FD of a connected unix socket or of a socketpair. During a connection, the backend creates a pair of connected sockets, and passes one of them over the FD. The bind part will use the received socket as the client FD. Should be used carefully. You may want to reference some environment variables in the address parameter, see [section 2.3](#2.3) about environment variables. The "[init-addr](#init-addr)" setting can be used to modify the way IP addresses should be resolved upon startup. <port> is an optional port specification. If set, all connections will be sent to this port. If unset, the same port the client connected to will be used. The port may also be prefixed by a "+" or a "-". In this case, the server's port will be determined by adding this value to the client's port. <param*> is a list of parameters for this server. The "server" keyword accepts an important number of options and has a complete section dedicated to it. Please refer to [section 5](#5) for more details. ``` Examples : ``` server first 10.1.1.1:1080 cookie first check inter 1000 server second 10.1.1.2:1080 cookie second check inter 1000 server transp ipv4@ server backup "${SRV_BACKUP}:1080" backup server www1_dc1 "${LAN_DC1}.101:80" server www1_dc2 "${LAN_DC2}.101:80" ``` ``` Note: regarding Linux's abstract namespace sockets, HAProxy uses the whole sun_path length for the address length. Some other programs such as socat use the string length only by default. Pass the option ",unix-tightsocklen=0" to any abstract socket definition in socat to make it compatible with HAProxy's. ``` **See also:** "default-server", "[http-send-name-header](#http-send-name-header)" and [section 5](#5) about server options **server-state-file-name** [ { use-backend-name | <file> } ] ``` Set the server state file to read, load and apply to servers available in this backend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | ``` It only applies when the directive "[load-server-state-from-file](#load-server-state-from-file)" is set to "local". When <file> is not provided, if "use-backend-name" is used or if this directive is not set, then backend name is used. If <file> starts with a slash '/', then it is considered as an absolute path. Otherwise, <file> is concatenated to the global directive "[server-state-base](#server-state-base)". ``` Example: ``` # The minimal configuration below would make HAProxy look for the server state file '/etc/haproxy/states/bk': global server-state-base /etc/haproxy/states backend bk load-server-state-from-file ``` **See also:** "[server-state-base](#server-state-base)", "[load-server-state-from-file](#load-server-state-from-file)", and "show servers state" **server-template** <prefix> <num | range> <fqdn>[:<port>] [params\*] ``` Set a template to initialize servers with shared parameters. The names of these servers are built from <prefix> and <num | range> parameters. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | Arguments: ``` <prefix> A prefix for the server names to be built. <num | range> If <num> is provided, this template initializes <num> servers with 1 up to <num> as server name suffixes.
A range of numbers <num_low>-<num_high> may also be used to use <num_low> up to <num_high> as server name suffixes. <fqdn> A FQDN for all the servers this template initializes. <port> Same meaning as "server" <port> argument (see "server" keyword). <params*> Remaining server parameters among all those supported by "server" keyword. ``` Examples: ``` # Initializes 3 servers with srv1, srv2 and srv3 as names, # google.com as FQDN, and health-check enabled. server-template srv 1-3 google.com:80 check # or server-template srv 3 google.com:80 check # would be equivalent to: server srv1 google.com:80 check server srv2 google.com:80 check server srv3 google.com:80 check ``` **source** <addr>[:<port>] [usesrc { <addr2>[:<port2>] | client | clientip } ] **source** <addr>[:<port>] [usesrc { <addr2>[:<port2>] | hdr\_ip(<hdr>[,<occ>]) } ] **source** <addr>[:<port>] [interface <name>] ``` Set the source address for outgoing connections ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <addr> is the IPv4 address HAProxy will bind to before connecting to a server. This address is also used as a source for health checks. The default value of 0.0.0.0 means that the system will select the most appropriate address to reach its destination. Optionally an address family prefix may be used before the address to force the family regardless of the address format, which can be useful to specify a path to a unix socket with no slash ('/'). Currently supported prefixes are : - 'ipv4@' -> address is always IPv4 - 'ipv6@' -> address is always IPv6 - 'unix@' -> address is a path to a local unix socket - 'abns@' -> address is in abstract namespace (Linux only) You may want to reference some environment variables in the address parameter, see [section 2.3](#2.3) about environment variables. <port> is an optional port. It is normally not needed but may be useful in some very specific contexts. The default value of zero means the system will select a free port. Note that port ranges are not supported in the backend. If you want to force port ranges, you have to specify them on each "server" line. <addr2> is the IP address to present to the server when connections are forwarded in full transparent proxy mode. This is currently only supported on some patched Linux kernels. When this address is specified, clients connecting to the server will be presented with this address, while health checks will still use the address <addr>. <port2> is the optional port to present to the server when connections are forwarded in full transparent proxy mode (see <addr2> above). The default value of zero means the system will select a free port. <hdr> is the name of a HTTP header in which to fetch the IP to bind to. This is the name of a comma-separated header list which can contain multiple IP addresses. By default, the last occurrence is used. This is designed to work with the X-Forwarded-For header and to automatically bind to the client's IP address as seen by previous proxy, typically Stunnel. In order to use another occurrence from the last one, please see the <occ> parameter below. When the header (or occurrence) is not found, no binding is performed so that the proxy's default IP address is used. Also keep in mind that the header name is case insensitive, as for any HTTP header. <occ> is the occurrence number of a value to be used in a multi-value header. 
This is to be used in conjunction with "hdr_ip(<hdr>)", in order to specify which occurrence to use for the source IP address. Positive values indicate a position from the first occurrence, 1 being the first one. Negative values indicate positions relative to the last one, -1 being the last one. This is helpful for situations where an X-Forwarded-For header is set at the entry point of an infrastructure and must be used several proxy layers away. When this value is not specified, -1 is assumed. Passing a zero here disables the feature. <name> is an optional interface name to which to bind to for outgoing traffic. On systems supporting this features (currently, only Linux), this allows one to bind all traffic to the server to this interface even if it is not the one the system would select based on routing tables. This should be used with extreme care. Note that using this option requires root privileges. ``` ``` The "source" keyword is useful in complex environments where a specific address only is allowed to connect to the servers. It may be needed when a private address must be used through a public gateway for instance, and it is known that the system cannot determine the adequate source address by itself. An extension which is available on certain patched Linux kernels may be used through the "usesrc" optional keyword. It makes it possible to connect to the servers with an IP address which does not belong to the system itself. This is called "full transparent proxy mode". For this to work, the destination servers have to route their traffic back to this address through the machine running HAProxy, and IP forwarding must generally be enabled on this machine. In this "full transparent proxy" mode, it is possible to force a specific IP address to be presented to the servers. This is not much used in fact. A more common use is to tell HAProxy to present the client's IP address. For this, there are two methods : - present the client's IP and port addresses. This is the most transparent mode, but it can cause problems when IP connection tracking is enabled on the machine, because a same connection may be seen twice with different states. However, this solution presents the huge advantage of not limiting the system to the 64k outgoing address+port couples, because all of the client ranges may be used. - present only the client's IP address and select a spare port. This solution is still quite elegant but slightly less transparent (downstream firewalls logs will not match upstream's). It also presents the downside of limiting the number of concurrent connections to the usual 64k ports. However, since the upstream and downstream ports are different, local IP connection tracking on the machine will not be upset by the reuse of the same session. This option sets the default source for all servers in the backend. It may also be specified in a "defaults" section. Finer source address specification is possible at the server level using the "source" server option. Refer to [section 5](#5) for more information. In order to work, "usesrc" requires root privileges. ``` Examples : ``` backend private # Connect to the servers using our 192.168.1.200 source address source 192.168.1.200 backend transparent_ssl1 # Connect to the SSL farm from the client's source address source 192.168.1.200 usesrc clientip backend transparent_ssl2 # Connect to the SSL farm from the client's source address and port # not recommended if IP conntrack is present on the local machine. 
source 192.168.1.200 usesrc client backend transparent_ssl3 # Connect to the SSL farm from the client's source address. It # is more conntrack-friendly. source 192.168.1.200 usesrc clientip backend transparent_smtp # Connect to the SMTP farm from the client's source address/port # with Tproxy version 4. source 0.0.0.0 usesrc clientip backend transparent_http # Connect to the servers using the client's IP as seen by previous # proxy. source 0.0.0.0 usesrc hdr_ip(x-forwarded-for,-1) ``` **See also :** the "source" server option in [section 5](#5), the Tproxy patches for the Linux kernel on www.balabit.com, the "bind" keyword. **srvtcpka-cnt** <count> ``` Sets the maximum number of keepalive probes TCP should send before dropping the connection on the server side. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <count> is the maximum number of keepalive probes. ``` ``` This keyword corresponds to the socket option TCP_KEEPCNT. If this keyword is not specified, system-wide TCP parameter (tcp_keepalive_probes) is used. The availability of this setting depends on the operating system. It is known to work on Linux. ``` **See also :** "[option srvtcpka](#option%20srvtcpka)", "[srvtcpka-idle](#srvtcpka-idle)", "[srvtcpka-intvl](#srvtcpka-intvl)". **srvtcpka-idle** <timeout> ``` Sets the time the connection needs to remain idle before TCP starts sending keepalive probes, if enabled the sending of TCP keepalive packets on the server side. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <timeout> is the time the connection needs to remain idle before TCP starts sending keepalive probes. It is specified in seconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` This keyword corresponds to the socket option TCP_KEEPIDLE. If this keyword is not specified, system-wide TCP parameter (tcp_keepalive_time) is used. The availability of this setting depends on the operating system. It is known to work on Linux. ``` **See also :** "[option srvtcpka](#option%20srvtcpka)", "[srvtcpka-cnt](#srvtcpka-cnt)", "[srvtcpka-intvl](#srvtcpka-intvl)". **srvtcpka-intvl** <timeout> ``` Sets the time between individual keepalive probes on the server side. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <timeout> is the time between individual keepalive probes. It is specified in seconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` This keyword corresponds to the socket option TCP_KEEPINTVL. If this keyword is not specified, system-wide TCP parameter (tcp_keepalive_intvl) is used. The availability of this setting depends on the operating system. It is known to work on Linux. ``` **See also :** "[option srvtcpka](#option%20srvtcpka)", "[srvtcpka-cnt](#srvtcpka-cnt)", "[srvtcpka-idle](#srvtcpka-idle)". **stats admin** { if | unless } <cond> ``` Enable statistics admin level if/unless a condition is matched ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | yes | ``` This statement enables the statistics admin level if/unless a condition is matched. The admin level allows to enable/disable servers from the web interface. 
By default, statistics page is read-only for security reasons. Currently, the POST request is limited to the buffer size minus the reserved buffer space, which means that if the list of servers is too long, the request won't be processed. It is recommended to alter few servers at a time. ``` Example : ``` # statistics admin level only for localhost backend stats_localhost stats enable stats admin if LOCALHOST ``` Example : ``` # statistics admin level always enabled because of the authentication backend stats_auth stats enable stats auth admin:AdMiN123 stats admin if TRUE ``` Example : ``` # statistics admin level depends on the authenticated user userlist stats-auth group admin users admin user admin insecure-password AdMiN123 group readonly users haproxy user haproxy insecure-password haproxy backend stats_auth stats enable acl AUTH http_auth(stats-auth) acl AUTH_ADMIN http_auth_group(stats-auth) admin stats http-request auth unless AUTH stats admin if AUTH_ADMIN ``` **See also :** "[stats enable](#stats%20enable)", "[stats auth](#stats%20auth)", "[stats http-request](#stats%20http-request)", [section 3.4](#3.4) about userlists and [section 7](#7) about ACL usage. **stats auth** <user>:<passwd> ``` Enable statistics with authentication and grant access to an account ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <user> is a user name to grant access to <passwd> is the cleartext password associated to this user ``` ``` This statement enables statistics with default settings, and restricts access to declared users only. It may be repeated as many times as necessary to allow as many users as desired. When a user tries to access the statistics without a valid account, a "401 Forbidden" response will be returned so that the browser asks the user to provide a valid user and password. The real which will be returned to the browser is configurable using "[stats realm](#stats%20realm)". Since the authentication method is HTTP Basic Authentication, the passwords circulate in cleartext on the network. Thus, it was decided that the configuration file would also use cleartext passwords to remind the users that those ones should not be sensitive and not shared with any other account. It is also possible to reduce the scope of the proxies which appear in the report using "[stats scope](#stats%20scope)". Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. ``` Example : ``` # public access (limited to this backend only) backend public_www server srv1 192.168.0.1:80 stats enable stats hide-version stats scope . stats uri /admin?stats stats realm HAProxy\ Statistics stats auth admin1:AdMiN123 stats auth admin2:AdMiN321 # internal monitoring access (unlimited) backend private_monitoring stats enable stats uri /admin?stats stats refresh 5s ``` **See also :** "[stats enable](#stats%20enable)", "[stats realm](#stats%20realm)", "[stats scope](#stats%20scope)", "[stats uri](#stats%20uri)" **stats enable** ``` Enable statistics reporting with default settings ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` This statement enables statistics reporting with default settings defined at build time. 
Unless stated otherwise, these settings are used : - stats uri : /haproxy?stats - stats realm : "HAProxy Statistics" - stats auth : no authentication - stats scope : no restriction Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. ``` Example : ``` # public access (limited to this backend only) backend public_www server srv1 192.168.0.1:80 stats enable stats hide-version stats scope . stats uri /admin?stats stats realm HAProxy\ Statistics stats auth admin1:AdMiN123 stats auth admin2:AdMiN321 # internal monitoring access (unlimited) backend private_monitoring stats enable stats uri /admin?stats stats refresh 5s ``` **See also :** "[stats auth](#stats%20auth)", "[stats realm](#stats%20realm)", "[stats uri](#stats%20uri)" **stats hide-version** ``` Enable statistics and hide HAProxy version reporting ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` By default, the stats page reports some useful status information along with the statistics. Among them is HAProxy's version. However, it is generally considered dangerous to report precise version to anyone, as it can help them target known weaknesses with specific attacks. The "[stats hide-version](#stats%20hide-version)" statement removes the version from the statistics report. This is recommended for public sites or any site with a weak login/password. Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. ``` Example : ``` # public access (limited to this backend only) backend public_www server srv1 192.168.0.1:80 stats enable stats hide-version stats scope . stats uri /admin?stats stats realm HAProxy\ Statistics stats auth admin1:AdMiN123 stats auth admin2:AdMiN321 # internal monitoring access (unlimited) backend private_monitoring stats enable stats uri /admin?stats stats refresh 5s ``` **See also :** "[stats auth](#stats%20auth)", "[stats enable](#stats%20enable)", "[stats realm](#stats%20realm)", "[stats uri](#stats%20uri)" **stats http-request** { allow | deny | auth [realm <realm>] } [ { if | unless } <condition> ] ``` Access control for statistics ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | ``` As "http-request", these set of options allow to fine control access to statistics. Each option may be followed by if/unless and acl. First option with matched condition (or option without condition) is final. For "deny" a 403 error will be returned, for "allow" normal processing is performed, for "auth" a 401/407 error code is returned so the client should be asked to enter a username and password. There is no fixed limit to the number of http-request statements per instance. ``` **See also :** "http-request", [section 3.4](#3.4) about userlists and [section 7](#7) about ACL usage. **stats realm** <realm> ``` Enable statistics and set authentication realm ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <realm> is the name of the HTTP Basic Authentication realm reported to the browser. The browser uses it to display it in the pop-up inviting the user to enter a valid username and password. 
``` ``` The realm is read as a single word, so any spaces in it should be escaped using a backslash ('\'). This statement is useful only in conjunction with "[stats auth](#stats%20auth)" since it is only related to authentication. Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. ``` Example : ``` # public access (limited to this backend only) backend public_www server srv1 192.168.0.1:80 stats enable stats hide-version stats scope . stats uri /admin?stats stats realm HAProxy\ Statistics stats auth admin1:AdMiN123 stats auth admin2:AdMiN321 # internal monitoring access (unlimited) backend private_monitoring stats enable stats uri /admin?stats stats refresh 5s ``` **See also :** "[stats auth](#stats%20auth)", "[stats enable](#stats%20enable)", "[stats uri](#stats%20uri)" **stats refresh** <delay> ``` Enable statistics with automatic refresh ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <delay> is the suggested refresh delay, specified in seconds, which will be returned to the browser consulting the report page. While the browser is free to apply any delay, it will generally respect it and refresh the page this every seconds. The refresh interval may be specified in any other non-default time unit, by suffixing the unit after the value, as explained at the top of this document. ``` ``` This statement is useful on monitoring displays with a permanent page reporting the load balancer's activity. When set, the HTML report page will include a link "refresh"/"stop refresh" so that the user can select whether they want automatic refresh of the page or not. Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. ``` Example : ``` # public access (limited to this backend only) backend public_www server srv1 192.168.0.1:80 stats enable stats hide-version stats scope . stats uri /admin?stats stats realm HAProxy\ Statistics stats auth admin1:AdMiN123 stats auth admin2:AdMiN321 # internal monitoring access (unlimited) backend private_monitoring stats enable stats uri /admin?stats stats refresh 5s ``` **See also :** "[stats auth](#stats%20auth)", "[stats enable](#stats%20enable)", "[stats realm](#stats%20realm)", "[stats uri](#stats%20uri)" **stats scope** { <name> | "." } ``` Enable statistics and limit access scope ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <name> is the name of a listen, frontend or backend section to be reported. The special name "." (a single dot) designates the section in which the statement appears. ``` ``` When this statement is specified, only the sections enumerated with this statement will appear in the report. All other ones will be hidden. This statement may appear as many times as needed if multiple sections need to be reported. Please note that the name checking is performed as simple string comparisons, and that it is never checked that a give section name really exists. Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. 
``` Example : ``` # public access (limited to this backend only) backend public_www server srv1 192.168.0.1:80 stats enable stats hide-version stats scope . stats uri /admin?stats stats realm HAProxy\ Statistics stats auth admin1:AdMiN123 stats auth admin2:AdMiN321 # internal monitoring access (unlimited) backend private_monitoring stats enable stats uri /admin?stats stats refresh 5s ``` **See also :** "[stats auth](#stats%20auth)", "[stats enable](#stats%20enable)", "[stats realm](#stats%20realm)", "[stats uri](#stats%20uri)" **stats show-desc** [ <desc> ] ``` Enable reporting of a description on the statistics page. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | ``` <desc> is an optional description to be reported. If unspecified, the description from global section is automatically used instead. This statement is useful for users that offer shared services to their customers, where node or description should be different for each customer. Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. By default description is not shown. ``` Example : ``` # internal monitoring access (unlimited) backend private_monitoring stats enable stats show-desc Master node for Europe, Asia, Africa stats uri /admin?stats stats refresh 5s ``` **See also:** "show-node", "[stats enable](#stats%20enable)", "[stats uri](#stats%20uri)" and "description" in global section. **stats show-legends** ``` Enable reporting additional information on the statistics page ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` Enable reporting additional information on the statistics page : - cap: capabilities (proxy) - mode: one of tcp, http or health (proxy) - id: SNMP ID (proxy, socket, server) - IP (socket, server) - cookie (backend, server) Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. Default behavior is not to show this information. ``` **See also:** "[stats enable](#stats%20enable)", "[stats uri](#stats%20uri)". **stats show-modules** ``` Enable display of extra statistics module on the statistics page ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : none ``` New columns are added at the end of the line containing the extra statistics values as a tooltip. Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. Default behavior is not to show this information. ``` **See also:** "[stats enable](#stats%20enable)", "[stats uri](#stats%20uri)". **stats show-node** [ <name> ] ``` Enable reporting of a host name on the statistics page. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments: ``` <name> is an optional name to be reported. If unspecified, the node name from global section is automatically used instead. ``` ``` This statement is useful for users that offer shared services to their customers, where node or description might be different on a stats page provided for each customer. Default behavior is not to show host name. 
Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. ``` Example: ``` # internal monitoring access (unlimited) backend private_monitoring stats enable stats show-node Europe-1 stats uri /admin?stats stats refresh 5s ``` **See also:** "show-desc", "[stats enable](#stats%20enable)", "[stats uri](#stats%20uri)", and "[node](#node)" in global section. **stats uri** <prefix> ``` Enable statistics and define the URI prefix to access them ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <prefix> is the prefix of any URI which will be redirected to stats. This prefix may contain a question mark ('?') to indicate part of a query string. ``` ``` The statistics URI is intercepted on the relayed traffic, so it appears as a page within the normal application. It is strongly advised to ensure that the selected URI will never appear in the application, otherwise it will never be possible to reach it in the application. The default URI compiled in HAProxy is "/haproxy?stats", but this may be changed at build time, so it's better to always explicitly specify it here. It is generally a good idea to include a question mark in the URI so that intermediate proxies refrain from caching the results. Also, since any string beginning with the prefix will be accepted as a stats request, the question mark helps ensuring that no valid URI will begin with the same words. It is sometimes very convenient to use "/" as the URI prefix, and put that statement in a "listen" instance of its own. That makes it easy to dedicate an address or a port to statistics only. Though this statement alone is enough to enable statistics reporting, it is recommended to set all other settings in order to avoid relying on default unobvious parameters. ``` Example : ``` # public access (limited to this backend only) backend public_www server srv1 192.168.0.1:80 stats enable stats hide-version stats scope . stats uri /admin?stats stats realm HAProxy\ Statistics stats auth admin1:AdMiN123 stats auth admin2:AdMiN321 # internal monitoring access (unlimited) backend private_monitoring stats enable stats uri /admin?stats stats refresh 5s ``` **See also :** "[stats auth](#stats%20auth)", "[stats enable](#stats%20enable)", "[stats realm](#stats%20realm)" **stick match** <pattern> [table <table>] [{if | unless} <cond>] ``` Define a request pattern matching condition to stick a user to a server ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | Arguments : ``` <pattern> is a sample expression rule as described in [section 7.3](#7.3). It describes what elements of the incoming request or connection will be analyzed in the hope to find a matching entry in a stickiness table. This rule is mandatory. <table> is an optional stickiness table name. If unspecified, the same backend's table is used. A stickiness table is declared using the "[stick-table](#stick-table)" statement. <cond> is an optional matching condition. It makes it possible to match on a certain criterion only when other conditions are met (or not met). For instance, it could be used to match on a source IP address except when a request passes through a known proxy, in which case we'd match on a header containing that IP address. 
``` ``` Some protocols or applications require complex stickiness rules and cannot always simply rely on cookies nor hashing. The "[stick match](#stick%20match)" statement describes a rule to extract the stickiness criterion from an incoming request or connection. See [section 7](#7) for a complete list of possible patterns and transformation rules. The table has to be declared using the "[stick-table](#stick-table)" statement. It must be of a type compatible with the pattern. By default it is the one which is present in the same backend. It is possible to share a table with other backends by referencing it using the "[table](#table)" keyword. If another table is referenced, the server's ID inside the backends are used. By default, all server IDs start at 1 in each backend, so the server ordering is enough. But in case of doubt, it is highly recommended to force server IDs using their "id" setting. It is possible to restrict the conditions where a "[stick match](#stick%20match)" statement will apply, using "if" or "unless" followed by a condition. See [section 7](#7) for ACL based conditions. There is no limit on the number of "[stick match](#stick%20match)" statements. The first that applies and matches will cause the request to be directed to the same server as was used for the request which created the entry. That way, multiple matches can be used as fallbacks. The stick rules are checked after the persistence cookies, so they will not affect stickiness if a cookie has already been used to select a server. That way, it becomes very easy to insert cookies and match on IP addresses in order to maintain stickiness between HTTP and HTTPS. ``` Example : ``` # forward SMTP users to the same server they just used for POP in the # last 30 minutes backend pop mode tcp balance roundrobin stick store-request src stick-table type ip size 200k expire 30m server s1 192.168.1.1:110 server s2 192.168.1.1:110 backend smtp mode tcp balance roundrobin stick match src table pop server s1 192.168.1.1:25 server s2 192.168.1.1:25 ``` **See also :** "[stick-table](#stick-table)", "[stick on](#stick%20on)", and [section 7](#7) about ACLs and samples fetching. **stick on** <pattern> [table <table>] [{if | unless} <condition>] ``` Define a request pattern to associate a user to a server ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | ``` Note : This form is exactly equivalent to "[stick match](#stick%20match)" followed by "[stick store-request](#stick%20store-request)", all with the same arguments. Please refer to both keywords for details. It is only provided as a convenience for writing more maintainable configurations. ``` Examples : ``` # The following form ... stick on src table pop if !localhost # ...is strictly equivalent to this one : stick match src table pop if !localhost stick store-request src table pop if !localhost # Use cookie persistence for HTTP, and stick on source address for HTTPS as # well as HTTP without cookie. Share the same table between both accesses. backend http mode http balance roundrobin stick on src table https cookie SRV insert indirect nocache server s1 192.168.1.1:80 cookie s1 server s2 192.168.1.1:80 cookie s2 backend https mode tcp balance roundrobin stick-table type ip size 200k expire 30m stick on src server s1 192.168.1.1:443 server s2 192.168.1.1:443 ``` **See also :** "[stick match](#stick%20match)", "[stick store-request](#stick%20store-request)". 
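The following additional sketch is not part of the original manual text; it only illustrates the "stick on" syntax described above with a hypothetical backend that sticks clients to the server which first handled their "sessionid" URL parameter. The backend name, parameter name and server addresses are invented for the example, and the single "stick on" line expands to the equivalent "[stick match](#stick%20match)" + "[stick store-request](#stick%20store-request)" pair:
```
# Hypothetical illustration only: stick each client to the server that
# first served its "sessionid" URL parameter. A string table is used
# because the parameter value is an opaque token, not an IP address.
backend app
    mode http
    balance roundrobin
    stick-table type string len 64 size 200k expire 30m
    stick on url_param(sessionid)
    server s1 192.168.1.10:80
    server s2 192.168.1.11:80
```
When the parameter is absent from a request, the rule simply does not match or store anything and load balancing proceeds normally.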
**stick store-request** <pattern> [table <table>] [{if | unless} <condition>] ``` Define a request pattern used to create an entry in a stickiness table ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | Arguments : ``` <pattern> is a sample expression rule as described in [section 7.3](#7.3). It describes what elements of the incoming request or connection will be analyzed, extracted and stored in the table once a server is selected. <table> is an optional stickiness table name. If unspecified, the same backend's table is used. A stickiness table is declared using the "[stick-table](#stick-table)" statement. <cond> is an optional storage condition. It makes it possible to store certain criteria only when some conditions are met (or not met). For instance, it could be used to store the source IP address except when the request passes through a known proxy, in which case we'd store a converted form of a header containing that IP address. ``` ``` Some protocols or applications require complex stickiness rules and cannot always simply rely on cookies nor hashing. The "[stick store-request](#stick%20store-request)" statement describes a rule to decide what to extract from the request and when to do it, in order to store it into a stickiness table for further requests to match it using the "[stick match](#stick%20match)" statement. Obviously the extracted part must make sense and have a chance to be matched in a further request. Storing a client's IP address for instance often makes sense. Storing an ID found in a URL parameter also makes sense. Storing a source port will almost never make any sense because it will be randomly matched. See [section 7](#7) for a complete list of possible patterns and transformation rules. The table has to be declared using the "[stick-table](#stick-table)" statement. It must be of a type compatible with the pattern. By default it is the one which is present in the same backend. It is possible to share a table with other backends by referencing it using the "[table](#table)" keyword. If another table is referenced, the server's ID inside the backends are used. By default, all server IDs start at 1 in each backend, so the server ordering is enough. But in case of doubt, it is highly recommended to force server IDs using their "id" setting. It is possible to restrict the conditions where a "[stick store-request](#stick%20store-request)" statement will apply, using "if" or "unless" followed by a condition. This condition will be evaluated while parsing the request, so any criteria can be used. See [section 7](#7) for ACL based conditions. There is no limit on the number of "[stick store-request](#stick%20store-request)" statements, but there is a limit of 8 simultaneous stores per request or response. This makes it possible to store up to 8 criteria, all extracted from either the request or the response, regardless of the number of rules. Only the 8 first ones which match will be kept. Using this, it is possible to feed multiple tables at once in the hope to increase the chance to recognize a user on another protocol or access method. Using multiple store-request rules with the same table is possible and may be used to find the best criterion to rely on, by arranging the rules by decreasing preference order. Only the first extracted criterion for a given table will be stored. All subsequent store- request rules referencing the same table will be skipped and their ACLs will not be evaluated. 
The "store-request" rules are evaluated once the server connection has been established, so that the table will contain the real server that processed the request. ``` Example : ``` # forward SMTP users to the same server they just used for POP in the # last 30 minutes backend pop mode tcp balance roundrobin stick store-request src stick-table type ip size 200k expire 30m server s1 192.168.1.1:110 server s2 192.168.1.1:110 backend smtp mode tcp balance roundrobin stick match src table pop server s1 192.168.1.1:25 server s2 192.168.1.1:25 ``` **See also :** "[stick-table](#stick-table)", "[stick on](#stick%20on)", about ACLs and sample fetching. **stick-table type** {ip | integer | string [len <length>] | binary [len <length>]} size <size> [expire <expire>] [nopurge] [peers <peersect>] [srvkey <srvkey>] [store <data\_type>]\* ``` Configure the stickiness table for the current section ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | yes | Arguments : ``` ip a table declared with "type ip" will only store IPv4 addresses. This form is very compact (about 50 bytes per entry) and allows very fast entry lookup and stores with almost no overhead. This is mainly used to store client source IP addresses. ipv6 a table declared with "type ipv6" will only store IPv6 addresses. This form is very compact (about 60 bytes per entry) and allows very fast entry lookup and stores with almost no overhead. This is mainly used to store client source IP addresses. integer a table declared with "type integer" will store 32bit integers which can represent a client identifier found in a request for instance. string a table declared with "type string" will store substrings of up to <len> characters. If the string provided by the pattern extractor is larger than <len>, it will be truncated before being stored. During matching, at most <len> characters will be compared between the string in the table and the extracted pattern. When not specified, the string is automatically limited to 32 characters. binary a table declared with "type binary" will store binary blocks of <len> bytes. If the block provided by the pattern extractor is larger than <len>, it will be truncated before being stored. If the block provided by the sample expression is shorter than <len>, it will be padded by 0. When not specified, the block is automatically limited to 32 bytes. <length> is the maximum number of characters that will be stored in a "string" type table (See type "string" above). Or the number of bytes of the block in "binary" type table. Be careful when changing this parameter as memory usage will proportionally increase. <size> is the maximum number of entries that can fit in the table. This value directly impacts memory usage. Count approximately 50 bytes per entry, plus the size of a string if any. The size supports suffixes "k", "m", "g" for 2^10, 2^20 and 2^30 factors. [nopurge] indicates that we refuse to purge older entries when the table is full. When not specified and the table is full when HAProxy wants to store an entry in it, it will flush a few of the oldest entries in order to release some space for the new ones. This is most often the desired behavior. In some specific cases, it be desirable to refuse new entries instead of purging the older ones. That may be the case when the amount of data to store is far above the hardware limits and we prefer not to offer access to new clients than to reject the ones already connected. 
When using this parameter, be sure to properly set the "expire" parameter (see below). <peersect> is the name of the peers section to use for replication. Entries which associate keys to server IDs are kept synchronized with the remote peers declared in this section. All entries are also automatically learned from the local peer (old process) during a soft restart. <expire> defines the maximum duration of an entry in the table since it was last created, refreshed using 'track-sc' or matched using 'stick match' or 'stick on' rule. The expiration delay is defined using the standard time format, similarly as the various timeouts. The maximum duration is slightly above 24 days. See [section 2.5](#2.5) for more information. If this delay is not specified, the session won't automatically expire, but older entries will be removed once full. Be sure not to use the "nopurge" parameter if not expiration delay is specified. Note: 'table_*' converters performs lookups but won't update touch expire since they don't require 'track-sc'. <srvkey> specifies how each server is identified for the purposes of the stick table. The valid values are "[name](#name)" and "[addr](#addr)". If "[name](#name)" is given, then <name> argument for the server (may be generated by a template). If "[addr](#addr)" is given, then the server is identified by its current network address, including the port. "[addr](#addr)" is especially useful if you are using service discovery to generate the addresses for servers with peered stick-tables and want to consistently use the same host across peers for a stickiness token. <data_type> is used to store additional information in the stick-table. This may be used by ACLs in order to control various criteria related to the activity of the client matching the stick-table. For each item specified here, the size of each entry will be inflated so that the additional data can fit. Several data types may be stored with an entry. Multiple data types may be specified after the "store" keyword, as a comma-separated list. Alternatively, it is possible to repeat the "store" keyword followed by one or several data types. Except for the "server_id" type which is automatically detected and enabled, all data types must be explicitly declared to be stored. If an ACL references a data type which is not stored, the ACL will simply not match. Some data types require an argument which must be passed just after the type between parenthesis. See below for the supported data types and their arguments. ``` ``` The data types that can be stored with an entry are the following : - server_id : this is an integer which holds the numeric ID of the server a request was assigned to. It is used by the "[stick match](#stick%20match)", "stick store", and "[stick on](#stick%20on)" rules. It is automatically enabled when referenced. - gpc(<nb>) : General Purpose Counters Array of <nb> elements. This is an array of positive 32-bit integers which may be used to count anything. Most of the time they will be used as a incremental counters on some entries, for instance to note that a limit is reached and trigger some actions. This array is limited to a maximum of 100 elements: gpc0 to gpc99, to ensure that the build of a peer update message can fit into the buffer. Users should take in consideration that a large amount of counters will increase the data size and the traffic load using peers protocol since all data/counters are pushed each time any of them is updated. 
This data_type will exclude the usage of the legacy data_types 'gpc0' and 'gpc1' on the same table. Using the 'gpc' array data_type, all 'gpc0' and 'gpc1' related fetches and actions will apply to the two first elements of this array.

- gpc_rate(<nb>,<period>) : Array of increment rates of General Purpose Counters over a period. Those elements are positive 32-bit integers which may be used for anything. Just like <gpc>, they count events, but instead of keeping a cumulative number, they maintain the rate at which the counter is incremented. Most of the time it will be used to measure the frequency of occurrence of certain events (e.g. requests to a specific URL). This array is limited to a maximum of 100 elements: gpc_rate(100) allowing the storage of rates for gpc0 to gpc99, to ensure that the build of a peer update message can fit into the buffer. The array cannot contain less than 1 element: use gpc_rate(1) if you want to store only the rate for gpc0. Users should take in consideration that a large amount of counters will increase the data size and the traffic load using peers protocol since all data/counters are pushed each time any of them is updated. This data_type will exclude the usage of the legacy data_types 'gpc0_rate' and 'gpc1_rate' on the same table. Using the 'gpc_rate' array data_type, all 'gpc0' and 'gpc1' related fetches and actions will apply to the two first elements of this array.

- gpc0 : first General Purpose Counter. It is a positive 32-bit integer which may be used for anything. Most of the time it will be used to put a special tag on some entries, for instance to note that a specific behavior was detected and must be known for future matches.

- gpc0_rate(<period>) : increment rate of the first General Purpose Counter over a period. It is a positive 32-bit integer which may be used for anything. Just like <gpc0>, it counts events, but instead of keeping a cumulative number, it maintains the rate at which the counter is incremented. Most of the time it will be used to measure the frequency of occurrence of certain events (e.g. requests to a specific URL).

- gpc1 : second General Purpose Counter. It is a positive 32-bit integer which may be used for anything. Most of the time it will be used to put a special tag on some entries, for instance to note that a specific behavior was detected and must be known for future matches.

- gpc1_rate(<period>) : increment rate of the second General Purpose Counter over a period. It is a positive 32-bit integer which may be used for anything. Just like <gpc1>, it counts events, but instead of keeping a cumulative number, it maintains the rate at which the counter is incremented. Most of the time it will be used to measure the frequency of occurrence of certain events (e.g. requests to a specific URL).

- gpt(<nb>) : General Purpose Tags Array of <nb> elements. This is an array of positive 32-bit integers which may be used for anything. Most of the time they will be used to put special tags on some entries, for instance to note that a specific behavior was detected and must be known for future matches. This array is limited to a maximum of 100 elements: gpt(100) allowing the storage of gpt0 to gpt99, to ensure that the build of a peer update message can fit into the buffer. The array cannot contain less than 1 element: use gpt(1) if you want to store only the tag gpt0.
Users should take in consideration that a large amount of counters will increase the data size and the traffic load using peers protocol since all data/counters are pushed each time any of them is updated. This data_type will exclude the usage of the legacy data_type 'gpt0' on the same table. Using the 'gpt' array data_type, all 'gpt0' related fetches and actions will apply to the first element of this array. - gpt0 : first General Purpose Tag. It is a positive 32-bit integer integer which may be used for anything. Most of the time it will be used to put a special tag on some entries, for instance to note that a specific behavior was detected and must be known for future matches - conn_cnt : Connection Count. It is a positive 32-bit integer which counts the absolute number of connections received from clients which matched this entry. It does not mean the connections were accepted, just that they were received. - conn_cur : Current Connections. It is a positive 32-bit integer which stores the concurrent connection counts for the entry. It is incremented once an incoming connection matches the entry, and decremented once the connection leaves. That way it is possible to know at any time the exact number of concurrent connections for an entry. - conn_rate(<period>) : frequency counter (takes 12 bytes). It takes an integer parameter <period> which indicates in milliseconds the length of the period over which the average is measured. It reports the average incoming connection rate over that period, in connections per period. The result is an integer which can be matched using ACLs. - sess_cnt : Session Count. It is a positive 32-bit integer which counts the absolute number of sessions received from clients which matched this entry. A session is a connection that was accepted by the layer 4 rules. - sess_rate(<period>) : frequency counter (takes 12 bytes). It takes an integer parameter <period> which indicates in milliseconds the length of the period over which the average is measured. It reports the average incoming session rate over that period, in sessions per period. The result is an integer which can be matched using ACLs. - http_req_cnt : HTTP request Count. It is a positive 32-bit integer which counts the absolute number of HTTP requests received from clients which matched this entry. It does not matter whether they are valid requests or not. Note that this is different from sessions when keep-alive is used on the client side. - http_req_rate(<period>) : frequency counter (takes 12 bytes). It takes an integer parameter <period> which indicates in milliseconds the length of the period over which the average is measured. It reports the average HTTP request rate over that period, in requests per period. The result is an integer which can be matched using ACLs. It does not matter whether they are valid requests or not. Note that this is different from sessions when keep-alive is used on the client side. - http_err_cnt : HTTP Error Count. It is a positive 32-bit integer which counts the absolute number of HTTP requests errors induced by clients which matched this entry. Errors are counted on invalid and truncated requests, as well as on denied or tarpitted requests, and on failed authentications. If the server responds with 4xx, then the request is also counted as an error since it's an error triggered by the client (e.g. vulnerability scan). - http_err_rate(<period>) : frequency counter (takes 12 bytes). 
It takes an integer parameter <period> which indicates in milliseconds the length of the period over which the average is measured. It reports the average HTTP request error rate over that period, in requests per period (see http_err_cnt above for what is accounted as an error). The result is an integer which can be matched using ACLs. - http_fail_cnt : HTTP Failure Count. It is a positive 32-bit integer which counts the absolute number of HTTP response failures induced by servers which matched this entry. Errors are counted on invalid and truncated responses, as well as any 5xx response other than 501 or 505. It aims at being used combined with path or URI to detect service failures. - http_fail_rate(<period>) : frequency counter (takes 12 bytes). It takes an integer parameter <period> which indicates in milliseconds the length of the period over which the average is measured. It reports the average HTTP response failure rate over that period, in requests per period (see http_fail_cnt above for what is accounted as a failure). The result is an integer which can be matched using ACLs. - bytes_in_cnt : client to server byte count. It is a positive 64-bit integer which counts the cumulative number of bytes received from clients which matched this entry. Headers are included in the count. This may be used to limit abuse of upload features on photo or video servers. - bytes_in_rate(<period>) : frequency counter (takes 12 bytes). It takes an integer parameter <period> which indicates in milliseconds the length of the period over which the average is measured. It reports the average incoming bytes rate over that period, in bytes per period. It may be used to detect users which upload too much and too fast. Warning: with large uploads, it is possible that the amount of uploaded data will be counted once upon termination, thus causing spikes in the average transfer speed instead of having a smooth one. This may partially be smoothed with "[option contstats](#option%20contstats)" though this is not perfect yet. Use of byte_in_cnt is recommended for better fairness. - bytes_out_cnt : server to client byte count. It is a positive 64-bit integer which counts the cumulative number of bytes sent to clients which matched this entry. Headers are included in the count. This may be used to limit abuse of bots sucking the whole site. - bytes_out_rate(<period>) : frequency counter (takes 12 bytes). It takes an integer parameter <period> which indicates in milliseconds the length of the period over which the average is measured. It reports the average outgoing bytes rate over that period, in bytes per period. It may be used to detect users which download too much and too fast. Warning: with large transfers, it is possible that the amount of transferred data will be counted once upon termination, thus causing spikes in the average transfer speed instead of having a smooth one. This may partially be smoothed with "[option contstats](#option%20contstats)" though this is not perfect yet. Use of byte_out_cnt is recommended for better fairness. There is only one stick-table per proxy. At the moment of writing this doc, it does not seem useful to have multiple tables per proxy. If this happens to be required, simply create a dummy backend with a stick-table in it and reference it. 
It is important to understand that stickiness based on learning information has some limitations, including the fact that all learned associations are lost upon restart unless peers are properly configured to transfer such information upon restart (recommended). In general it can be good as a complement but not always as an exclusive stickiness. Last, memory requirements may be important when storing many data types. Indeed, storing all indicators above at once in each entry requires 116 bytes per entry, or 116 MB for a 1-million entries table. This is definitely not something that can be ignored. ``` Example: ``` # Keep track of counters of up to 1 million IP addresses over 5 minutes # and store a general purpose counter and the average connection rate # computed over a sliding window of 30 seconds. stick-table type ip size 1m expire 5m store gpc0,conn_rate(30s) ``` **See also :** "[stick match](#stick%20match)", "[stick on](#stick%20on)", "[stick store-request](#stick%20store-request)", [section 2.5](#2.5) about time format and [section 7](#7) about ACLs. **stick store-response** <pattern> [table <table>] [{if | unless} <condition>] ``` Define a response pattern used to create an entry in a stickiness table ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | Arguments : ``` <pattern> is a sample expression rule as described in [section 7.3](#7.3). It describes what elements of the response or connection will be analyzed, extracted and stored in the table once a server is selected. <table> is an optional stickiness table name. If unspecified, the same backend's table is used. A stickiness table is declared using the "[stick-table](#stick-table)" statement. <cond> is an optional storage condition. It makes it possible to store certain criteria only when some conditions are met (or not met). For instance, it could be used to store the SSL session ID only when the response is a SSL server hello. ``` ``` Some protocols or applications require complex stickiness rules and cannot always simply rely on cookies nor hashing. The "[stick store-response](#stick%20store-response)" statement describes a rule to decide what to extract from the response and when to do it, in order to store it into a stickiness table for further requests to match it using the "[stick match](#stick%20match)" statement. Obviously the extracted part must make sense and have a chance to be matched in a further request. Storing an ID found in a header of a response makes sense. See [section 7](#7) for a complete list of possible patterns and transformation rules. The table has to be declared using the "[stick-table](#stick-table)" statement. It must be of a type compatible with the pattern. By default it is the one which is present in the same backend. It is possible to share a table with other backends by referencing it using the "[table](#table)" keyword. If another table is referenced, the server's ID inside the backends are used. By default, all server IDs start at 1 in each backend, so the server ordering is enough. But in case of doubt, it is highly recommended to force server IDs using their "id" setting. It is possible to restrict the conditions where a "[stick store-response](#stick%20store-response)" statement will apply, using "if" or "unless" followed by a condition. This condition will be evaluated while parsing the response, so any criteria can be used. See [section 7](#7) for ACL based conditions. 
There is no limit on the number of "[stick store-response](#stick%20store-response)" statements, but there is a limit of 8 simultaneous stores per request or response. This makes it possible to store up to 8 criteria, all extracted from either the request or the response, regardless of the number of rules. Only the 8 first ones which match will be kept. Using this, it is possible to feed multiple tables at once in the hope to increase the chance to recognize a user on another protocol or access method. Using multiple store-response rules with the same table is possible and may be used to find the best criterion to rely on, by arranging the rules by decreasing preference order. Only the first extracted criterion for a given table will be stored. All subsequent store- response rules referencing the same table will be skipped and their ACLs will not be evaluated. However, even if a store-request rule references a table, a store-response rule may also use the same table. This means that each table may learn exactly one element from the request and one element from the response at once. The table will contain the real server that processed the request. ``` Example : ``` # Learn SSL session ID from both request and response and create affinity. backend https mode tcp balance roundrobin # maximum SSL session ID length is 32 bytes. stick-table type binary len 32 size 30k expire 30m acl clienthello req.ssl_hello_type 1 acl serverhello rep.ssl_hello_type 2 # use tcp content accepts to detects ssl client and server hello. tcp-request inspect-delay 5s tcp-request content accept if clienthello # no timeout on response inspect delay by default. tcp-response content accept if serverhello # SSL session ID (SSLID) may be present on a client or server hello. # Its length is coded on 1 byte at offset 43 and its value starts # at offset 44. # Match and learn on request if client hello. stick on req.payload_lv(43,1) if clienthello # Learn on response if server hello. stick store-response resp.payload_lv(43,1) if serverhello server s1 192.168.1.1:443 server s2 192.168.1.1:443 ``` **See also :** "[stick-table](#stick-table)", "[stick on](#stick%20on)", and [section 7](#7) about ACLs and pattern extraction. **tcp-check comment** <string> ``` Defines a comment for the following the tcp-check rule, reported in logs if it fails. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <string> is the comment message to add in logs if the following tcp-check rule fails. ``` ``` It only works for connect, send and expect rules. It is useful to make user-friendly error reporting. ``` **See also :** "[option tcp-check](#option%20tcp-check)", "[tcp-check connect](#tcp-check%20connect)", "[tcp-check send](#tcp-check%20send)" and "[tcp-check expect](#tcp-check%20expect)". **tcp-check connect** [default] [port <expr>] [addr <ip>] [send-proxy] [via-socks4] [ssl] [sni <sni>] [alpn <alpn>] [linger] [proto <name>] [comment <msg>] ``` Opens a new connection ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` comment <msg> defines a message to report if the rule evaluation fails. default Use default options of the server line to do the health checks. The server options are used only if not redefined. port <expr> if not set, check port or server port is used. It tells HAProxy where to open the connection to. 
<port> must be a valid TCP port, as an integer from 1 to 65535, or a sample-fetch expression.
addr <ip> defines the IP address to do the health check.
send-proxy sends a PROXY protocol string.
via-socks4 enables outgoing health checks using upstream socks4 proxy.
ssl opens a ciphered connection.
sni <sni> specifies the SNI to use to do health checks over SSL.
alpn <alpn> defines which protocols to advertise with ALPN. The protocol list consists of a comma-delimited list of protocol names, for instance: "http/1.1,http/1.0" (without quotes). If it is not set, the server ALPN is used.
proto <name> forces the multiplexer's protocol to use for this connection. It must be a TCP mux protocol and it must be usable on the backend side. The list of available protocols is reported in haproxy -vv.
linger cleanly closes the connection instead of using a single RST.
```
```
When an application listens on more than a single TCP port or when HAProxy load-balances many services in a single backend, it makes sense to probe all the services individually before considering a server as operational. When no TCP port is configured on the server line and there is no server port directive, then the 'tcp-check connect port <port>' rule must be the first step of the sequence. In a tcp-check ruleset, a 'connect' rule is required and the ruleset must start with one; the purpose is to ensure administrators know exactly what they are checking. When a connect must start the ruleset, it may still be preceded by set-var, unset-var or comment rules.
```
Examples :
```
# check HTTP and HTTPS services on a server.
# first open port 80 thanks to server line port directive, then
# tcp-check opens port 443, ciphered and runs a request on it:
option tcp-check
tcp-check connect
tcp-check send GET\ /\ HTTP/1.0\r\n
tcp-check send Host:\ haproxy.1wt.eu\r\n
tcp-check send \r\n
tcp-check expect rstring (2..|3..)
tcp-check connect port 443 ssl
tcp-check send GET\ /\ HTTP/1.0\r\n
tcp-check send Host:\ haproxy.1wt.eu\r\n
tcp-check send \r\n
tcp-check expect rstring (2..|3..)
server www 10.0.0.1 check port 80

# check both POP and IMAP from a single server:
option tcp-check
tcp-check connect port 110 linger
tcp-check expect string +OK\ POP3\ ready
tcp-check connect port 143
tcp-check expect string *\ OK\ IMAP4\ ready
server mail 10.0.0.1 check
```
**See also :** "[option tcp-check](#option%20tcp-check)", "[tcp-check send](#tcp-check%20send)", "[tcp-check expect](#tcp-check%20expect)"

**tcp-check expect** [min-recv <int>] [comment <msg>] [ok-status <st>] [error-status <st>] [tout-status <st>] [on-success <fmt>] [on-error <fmt>] [status-code <expr>] [!] <match> <pattern>
```
Specify data to be collected and analyzed during a generic health check
```
May be used in sections :
| defaults | frontend | listen | backend |
| --- | --- | --- | --- |
| yes | no | yes | yes |

Arguments :
```
comment <msg> defines a message to report if the rule evaluation fails.
min-recv is optional and can define the minimum amount of data required to evaluate the current expect rule. If the number of received bytes is under this limit, the check will wait for more data. This option can be used to resolve some ambiguous matching rules or to avoid executing costly regex matches on content known to be still incomplete. If an exact string (string or binary) is used, the minimum between the string length and this parameter is used. This parameter is ignored if it is set to -1. If the expect rule does not match, the check will wait for more data.
If set to 0, the evaluation result is always conclusive. <match> is a keyword indicating how to look for a specific pattern in the response. The keyword may be one of "string", "rstring", "binary" or "rbinary". The keyword may be preceded by an exclamation mark ("!") to negate the match. Spaces are allowed between the exclamation mark and the keyword. See below for more details on the supported keywords. ok-status <st> is optional and can be used to set the check status if the expect rule is successfully evaluated and if it is the last rule in the tcp-check ruleset. "L7OK", "L7OKC", "L6OK" and "L4OK" are supported : - L7OK : check passed on layer 7 - L7OKC : check conditionally passed on layer 7, set server to NOLB state. - L6OK : check passed on layer 6 - L4OK : check passed on layer 4 By default "L7OK" is used. error-status <st> is optional and can be used to set the check status if an error occurred during the expect rule evaluation. "L7OKC", "L7RSP", "L7STS", "L6RSP" and "L4CON" are supported : - L7OKC : check conditionally passed on layer 7, set server to NOLB state. - L7RSP : layer 7 invalid response - protocol error - L7STS : layer 7 response error, for example HTTP 5xx - L6RSP : layer 6 invalid response - protocol error - L4CON : layer 1-4 connection problem By default "L7RSP" is used. tout-status <st> is optional and can be used to set the check status if a timeout occurred during the expect rule evaluation. "L7TOUT", "L6TOUT", and "L4TOUT" are supported : - L7TOUT : layer 7 (HTTP/SMTP) timeout - L6TOUT : layer 6 (SSL) timeout - L4TOUT : layer 1-4 timeout By default "L7TOUT" is used. on-success <fmt> is optional and can be used to customize the informational message reported in logs if the expect rule is successfully evaluated and if it is the last rule in the tcp-check ruleset. <fmt> is a log-format string. on-error <fmt> is optional and can be used to customize the informational message reported in logs if an error occurred during the expect rule evaluation. <fmt> is a log-format string. status-code <expr> is optional and can be used to set the check status code reported in logs, on success or on error. <expr> is a standard HAProxy expression formed by a sample-fetch followed by some converters. <pattern> is the pattern to look for. It may be a string or a regular expression. If the pattern contains spaces, they must be escaped with the usual backslash ('\'). If the match is set to binary, then the pattern must be passed as a series of hexadecimal digits in an even number. Each sequence of two digits will represent a byte. The hexadecimal digits may be used upper or lower case. ``` ``` The available matches are intentionally similar to their http-check cousins : string <string> : test the exact string matches in the response buffer. A health check response will be considered valid if the response's buffer contains this exact string. If the "string" keyword is prefixed with "!", then the response will be considered invalid if the body contains this string. This can be used to look for a mandatory pattern in a protocol response, or to detect a failure when a specific error appears in a protocol banner. rstring <regex> : test a regular expression on the response buffer. A health check response will be considered valid if the response's buffer matches this expression. If the "rstring" keyword is prefixed with "!", then the response will be considered invalid if the body matches the expression. string-lf <fmt> : test a log-format string match in the response's buffer. 
A health check response will be considered valid if the response's buffer contains the string resulting of the evaluation of <fmt>, which follows the log-format rules. If prefixed with "!", then the response will be considered invalid if the buffer contains the string. binary <hexstring> : test the exact string in its hexadecimal form matches in the response buffer. A health check response will be considered valid if the response's buffer contains this exact hexadecimal string. Purpose is to match data on binary protocols. rbinary <regex> : test a regular expression on the response buffer, like "rstring". However, the response buffer is transformed into its hexadecimal form, including NUL-bytes. This allows using all regex engines to match any binary content. The hexadecimal transformation takes twice the size of the original response. As such, the expected pattern should work on at-most half the response buffer size. binary-lf <hexfmt> : test a log-format string in its hexadecimal form match in the response's buffer. A health check response will be considered valid if the response's buffer contains the hexadecimal string resulting of the evaluation of <fmt>, which follows the log-format rules. If prefixed with "!", then the response will be considered invalid if the buffer contains the hexadecimal string. The hexadecimal string is converted in a binary string before matching the response's buffer. It is important to note that the responses will be limited to a certain size defined by the global "[tune.bufsize](#tune.bufsize)" option, which defaults to 16384 bytes. Thus, too large responses may not contain the mandatory pattern when using "string", "rstring" or binary. If a large response is absolutely required, it is possible to change the default max size by setting the global variable. However, it is worth keeping in mind that parsing very large responses can waste some CPU cycles, especially when regular expressions are used, and that it is always better to focus the checks on smaller resources. Also, in its current state, the check will not find any string nor regex past a null character in the response. Similarly it is not possible to request matching the null character. ``` Examples : ``` # perform a POP check option tcp-check tcp-check expect string +OK\ POP3\ ready # perform an IMAP check option tcp-check tcp-check expect string *\ OK\ IMAP4\ ready # look for the redis master server option tcp-check tcp-check send PING\r\n tcp-check expect string +PONG tcp-check send info\ replication\r\n tcp-check expect string role:master tcp-check send QUIT\r\n tcp-check expect string +OK ``` **See also :** "[option tcp-check](#option%20tcp-check)", "[tcp-check connect](#tcp-check%20connect)", "[tcp-check send](#tcp-check%20send)", "[tcp-check send-binary](#tcp-check%20send-binary)", "[http-check expect](#http-check%20expect)", tune.bufsize **tcp-check send** <data> [comment <msg>] **tcp-check send-lf** <fmt> [comment <msg>] ``` Specify a string or a log-format string to be sent as a question during a generic health check ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` comment <msg> defines a message to report if the rule evaluation fails. <data> is the string that will be sent during a generic health check session. <fmt> is the log-format string that will be sent, once evaluated, during a generic health check session. 
``` Examples : ``` # look for the redis master server option tcp-check tcp-check send info\ replication\r\n tcp-check expect string role:master ``` **See also :** "[option tcp-check](#option%20tcp-check)", "[tcp-check connect](#tcp-check%20connect)", "[tcp-check expect](#tcp-check%20expect)", "[tcp-check send-binary](#tcp-check%20send-binary)", tune.bufsize **tcp-check send-binary** <hexstring> [comment <msg>] **tcp-check send-binary-lf** <hexfmt> [comment <msg>] ``` Specify a hex-digit string or a hex-digit log-format string to be sent as a binary question during a raw tcp health check ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` comment <msg> defines a message to report if the rule evaluation fails. <hexstring> is the hexadecimal string that will be sent, once converted to binary, during a generic health check session. <hexfmt> is the hexadecimal log-format string that will be sent, once evaluated and converted to binary, during a generic health check session. ``` Examples : ``` # redis check in binary option tcp-check tcp-check send-binary 50494e470d0a # PING\r\n tcp-check expect binary 2b504F4e47 # +PONG ``` **See also :** "[option tcp-check](#option%20tcp-check)", "[tcp-check connect](#tcp-check%20connect)", "[tcp-check expect](#tcp-check%20expect)", "[tcp-check send](#tcp-check%20send)", tune.bufsize ``` tcp-check set-var(<var-name>[,<cond> ...]) <expr> tcp-check set-var-fmt(<var-name>[,<cond> ...]) <fmt> This operation sets the content of a variable. The variable is declared inline. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <var-name> The name of the variable starts with an indication about its scope. The scopes allowed for tcp-check are: "[proc](#proc)" : the variable is shared with the whole process. "sess" : the variable is shared with the tcp-check session. "[check](#check)": the variable is declared for the lifetime of the tcp-check. This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.', and '-'. <cond> A set of conditions that must all be true for the variable to actually be set (such as "ifnotempty", "ifgt" ...). See the set-var converter's description for a full list of possible conditions. <expr> Is a sample-fetch expression potentially followed by converters. <fmt> This is the value expressed using log-format rules (see Custom Log Format in [section 8.2.4](#8.2.4)). ``` Examples : ``` tcp-check set-var(check.port) int(1234) tcp-check set-var-fmt(check.name) "%H" ``` **tcp-check unset-var**(<var-name>) ``` Free a reference to a variable within its scope. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <var-name> The name of the variable starts with an indication about its scope. The scopes allowed for tcp-check are: "[proc](#proc)" : the variable is shared with the whole process. "sess" : the variable is shared with the tcp-check session. "[check](#check)": the variable is declared for the lifetime of the tcp-check. This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.', and '-'.
``` Examples : ``` tcp-check unset-var(check.port) ``` **tcp-request connection** <action> <options...> [ { if | unless } <condition> ] ``` Perform an action on an incoming connection depending on a layer 4 condition ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | yes | yes | no | Arguments : ``` <action> defines the action to perform if the condition applies. See below. <condition> is a standard layer4-only ACL-based condition (see [section 7](#7)). ``` ``` Immediately after acceptance of a new incoming connection, it is possible to evaluate some conditions to decide whether this connection must be accepted or dropped or have its counters tracked. Those conditions cannot make use of any data contents because the connection has not been read from yet, and the buffers are not yet allocated. This is used to selectively and very quickly accept or drop connections from various sources with a very low overhead. If some contents need to be inspected in order to take the decision, the "[tcp-request content](#tcp-request%20content)" statements must be used instead. The "[tcp-request connection](#tcp-request%20connection)" rules are evaluated in their exact declaration order. If no rule matches or if there is no rule, the default action is to accept the incoming connection. There is no specific limit to the number of rules which may be inserted. Any rule may optionally be followed by an ACL-based condition, in which case it will only be evaluated if the condition is true. The first keyword is the rule's action. Several types of actions are supported: - accept - expect-netscaler-cip layer4 - expect-proxy layer4 - reject - sc-inc-gpc(<idx>,<sc-id>) - sc-inc-gpc0(<sc-id>) - sc-inc-gpc1(<sc-id>) - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } - sc-set-gpt0(<sc-id>) { <int> | <expr> } - set-dst <expr> - set-dst-port <expr> - set-mark <mark> - set-src <expr> - set-src-port <expr> - set-tos <tos> - set-var(<var-name>[,<cond> ...]) <expr> - set-var-fmt(<var-name>[,<cond> ...]) <fmt> - silent-drop - track-sc0 <key> [table <table>] - track-sc1 <key> [table <table>] - track-sc2 <key> [table <table>] - unset-var(<var-name>) The supported actions are described below. There is no limit to the number of "[tcp-request connection](#tcp-request%20connection)" statements per instance. This directive is only available from named defaults sections, not anonymous ones. Rules defined in the defaults section are evaluated before ones in the associated proxy section. To avoid ambiguities, in this case the same defaults section cannot be used by proxies with the frontend capability and by proxies with the backend capability. It means a listen section cannot use a defaults section defining such rules. Note that the "if/unless" condition is optional. If no condition is set on the action, it is simply performed unconditionally. That can be useful for "track-sc*" actions as well as for changing the default action to a reject. ``` Example: Accept all connections from white-listed hosts, reject too fast connections without counting them, and track accepted connections. This results in connection rate being capped from abusive sources. ``` tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst } tcp-request connection reject if { src_conn_rate gt 10 } tcp-request connection track-sc0 src ``` Example: Accept all connections from white-listed hosts, count all other connections and reject too fast ones.
This results in abusive ones being blocked as long as they don't slow down. ``` tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst } tcp-request connection track-sc0 src tcp-request connection reject if { sc0_conn_rate gt 10 } ``` Example: Enable the PROXY protocol for traffic coming from all known proxies. ``` tcp-request connection expect-proxy layer4 if { src -f proxies.lst } ``` ``` See [section 7](#7) about ACL usage. ``` **See also :** "[tcp-request session](#tcp-request%20session)", "[tcp-request content](#tcp-request%20content)", "[stick-table](#stick-table)" **tcp-request connection accept** [ { if | unless } <condition> ] ``` This is used to accept the connection. No further "[tcp-request connection](#tcp-request%20connection)" rules are evaluated. ``` **tcp-request connection expect-netscaler-cip layer4** [ { if | unless } <condition> ] ``` This configures the client-facing connection to receive a NetScaler Client IP insertion protocol header before any byte is read from the socket. This is equivalent to having the "[accept-netscaler-cip](#accept-netscaler-cip)" keyword on the "bind" line, except that using the TCP rule allows the NetScaler Client IP insertion protocol to be accepted only for certain IP address ranges using an ACL. This is convenient when multiple layers of load balancers are passed through by traffic coming from public hosts. ``` **tcp-request connection expect-proxy layer4** [ { if | unless } <condition> ] ``` This configures the client-facing connection to receive a PROXY protocol header before any byte is read from the socket. This is equivalent to having the "[accept-proxy](#accept-proxy)" keyword on the "bind" line, except that using the TCP rule allows the PROXY protocol to be accepted only for certain IP address ranges using an ACL. This is convenient when multiple layers of load balancers are passed through by traffic coming from public hosts. ``` **tcp-request connection reject** [ { if | unless } <condition> ] ``` This is used to reject the connection. No further "[tcp-request connection](#tcp-request%20connection)" rules are evaluated. Rejected connections do not even become a session, which is why they are accounted for separately in the stats, as "denied connections". They are not considered for the session rate-limit and are not logged either. The reason is that these rules should only be used to filter extremely high connection rates such as the ones encountered during a massive DDoS attack. Under these extreme conditions, the simple action of logging each event would make the system collapse and would considerably lower the filtering capacity. If logging is absolutely desired, then "tcp-request content" rules should be used instead, as "[tcp-request session](#tcp-request%20session)" rules will not log either. ``` **tcp-request connection sc-inc-gpc**(<idx>,<sc-id>) [ { if | unless } <condition> ] **tcp-request connection sc-inc-gpc0**(<sc-id>) [ { if | unless } <condition> ] **tcp-request connection sc-inc-gpc1**(<sc-id>) [ { if | unless } <condition> ] ``` These actions increment the General Purpose Counters according to the sticky counter designated by <sc-id>. Please refer to "[http-request sc-inc-gpc](#http-request%20sc-inc-gpc)", "[http-request sc-inc-gpc0](#http-request%20sc-inc-gpc0)" and "[http-request sc-inc-gpc1](#http-request%20sc-inc-gpc1)" for a complete description.
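```
Example (added illustration, not part of the original manual): a minimal sketch of counting connection-level events with a general purpose counter; the stick-table parameters and the rate threshold are arbitrary assumptions.
```
# Track each source address and bump gpc0 for sources that connect too fast.
stick-table type ip size 100k expire 10m store gpc0,conn_rate(10s)
tcp-request connection track-sc0 src
tcp-request connection sc-inc-gpc0(0) if { src_conn_rate gt 50 }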
``` **tcp-request connection sc-set-gpt**(<idx>,<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] **tcp-request connection sc-set-gpt0**(<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] ``` These actions set the 32-bit unsigned General Purpose Tags according to the sticky counter designated by <sc-id>. Please refer to "http-request sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. ``` **tcp-request connection set-dst** <expr> [ { if | unless } <condition> ] **tcp-request connection set-dst-port** <expr> [ { if | unless } <condition> ] ``` These actions are used to set the destination IP/Port address to the value of the specified expression. Please refer to "[http-request set-dst](#http-request%20set-dst)" and "[http-request set-dst-port](#http-request%20set-dst-port)" for a complete description. ``` **tcp-request connection set-mark** <mark> [ { if | unless } <condition> ] ``` This action is used to set the Netfilter/IPFW MARK in all packets sent to the client to the value passed in <mark> on platforms which support it. Please refer to "[http-request set-mark](#http-request%20set-mark)" for a complete description. ``` **tcp-request connection set-src** <expr> [ { if | unless } <condition> ] **tcp-request connection set-src-port** <expr> [ { if | unless } <condition> ] ``` These actions are used to set the source IP/Port address to the value of the specified expression. Please refer to "[http-request set-src](#http-request%20set-src)" and "[http-request set-src-port](#http-request%20set-src-port)" for a complete description. ``` **tcp-request connection set-tos** <tos> [ { if | unless } <condition> ] ``` This is used to set the TOS or DSCP field value of packets sent to the client to the value passed in <tos> on platforms which support this. Please refer to "[http-request set-tos](#http-request%20set-tos)" for a complete description. tcp-request connection set-var(<var-name>[,<cond> ...]) <expr> [ { if | unless } <condition> ] tcp-request connection set-var-fmt(<var-name>[,<cond> ...]) <fmt> [ { if | unless } <condition> ] This is used to set the contents of a variable. The variable is declared inline. "[tcp-request connection](#tcp-request%20connection)" can set variables in the "[proc](#proc)" and "sess" scopes. Please refer to "http-request set-var" and "http-request set-var-fmt" for a complete description. ``` **tcp-request connection silent-drop** [ rst-ttl <ttl> ] [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and makes the client-facing connection suddenly disappear using a system-dependent way that tries to prevent the client from being notified. Please refer to "[http-request silent-drop](#http-request%20silent-drop)" for a complete description. ``` **tcp-request connection track-sc0** <key> [table <table>] [ { if | unless } <condition> ] **tcp-request connection track-sc1** <key> [table <table>] [ { if | unless } <condition> ] **tcp-request connection track-sc2** <key> [table <table>] [ { if | unless } <condition> ] ``` This enables tracking of sticky counters from the current connection. Please refer to "[http-request track-sc0](#http-request%20track-sc0)", "[http-request track-sc1](#http-request%20track-sc1)" and "http-request track-sc2" for a complete description. ``` **tcp-request connection unset-var**(<var-name>) [ { if | unless } <condition> ] ``` This is used to unset a variable. Please refer to "http-request set-var" for details about variables.
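```
Example (added illustration, not part of the original manual): a small sketch combining a few of the actions above; the variable names and the blocklist path are assumptions.
```
# Remember the original destination early, then silently drop blocklisted sources.
tcp-request connection set-var(sess.orig_dst) dst
tcp-request connection set-var-fmt(sess.orig_dst_port) %[dst_port]
tcp-request connection silent-drop if { src -f /etc/haproxy/blocklist.lst }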
``` **tcp-request content** <action> [{if | unless} <condition>] ``` Perform an action on a new session depending on a layer 4-7 condition ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | yes | yes | yes | Arguments : ``` <action> defines the action to perform if the condition applies. See below. <condition> is a standard layer 4-7 ACL-based condition (see [section 7](#7)). ``` ``` A request's contents can be analyzed at an early stage of request processing called "TCP content inspection". During this stage, ACL-based rules are evaluated every time the request contents are updated, until either an "accept", a "reject" or a "switch-mode" rule matches, or the TCP request inspection delay expires with no matching rule. The first difference between these rules and "[tcp-request connection](#tcp-request%20connection)" rules is that "[tcp-request content](#tcp-request%20content)" rules can make use of contents to take a decision. Most often, these decisions will consider a protocol recognition or validity. The second difference is that content-based rules can be used in both frontends and backends. In case of HTTP keep-alive with the client, all tcp-request content rules are evaluated again, so HAProxy keeps a record of what sticky counters were assigned by a "[tcp-request connection](#tcp-request%20connection)" versus a "[tcp-request content](#tcp-request%20content)" rule, and flushes all the content-related ones after processing an HTTP request, so that they may be evaluated again by the rules being evaluated again for the next request. This is of particular importance when the rule tracks some L7 information or when it is conditioned by an L7-based ACL, since tracking may change between requests. Content-based rules are evaluated in their exact declaration order. If no rule matches or if there is no rule, the default action is to accept the contents. There is no specific limit to the number of rules which may be inserted. The first keyword is the rule's action. Several types of actions are supported: - accept - capture <sample> len <length> - do-resolve(<var>,<resolvers>,[ipv4,ipv6]) <expr> - reject - sc-inc-gpc(<idx>,<sc-id>) - sc-inc-gpc0(<sc-id>) - sc-inc-gpc1(<sc-id>) - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } - sc-set-gpt0(<sc-id>) { <int> | <expr> } - send-spoe-group <engine-name> <group-name> - set-bandwidth-limit <name> [limit <expr>] [period <expr>] - set-dst <expr> - set-dst-port <expr> - set-log-level <level> - set-mark <mark> - set-nice <nice> - set-priority-class <expr> - set-priority-offset <expr> - set-src <expr> - set-src-port <expr> - set-tos <tos> - set-var(<var-name>[,<cond> ...]) <expr> - set-var-fmt(<var-name>[,<cond> ...]) <fmt> - silent-drop - switch-mode http [ proto <name> ] - track-sc0 <key> [table <table>] - track-sc1 <key> [table <table>] - track-sc2 <key> [table <table>] - unset-var(<var-name>) - use-service <service-name> The supported actions are described below. While there is nothing mandatory about it, it is recommended to use the track-sc0 in "[tcp-request connection](#tcp-request%20connection)" rules, track-sc1 for "tcp-request content" rules in the frontend, and track-sc2 for "[tcp-request content](#tcp-request%20content)" rules in the backend, because that makes the configuration more readable and easier to troubleshoot, but this is just a guideline and all counters may be used everywhere. This directive is only available from named defaults sections, not anonymous ones. 
Rules defined in the defaults section are evaluated before ones in the associated proxy section. To avoid ambiguities, in this case the same defaults section cannot be used by proxies with the frontend capability and by proxies with the backend capability. It means a listen section cannot use a defaults section defining such rules. Note that the "if/unless" condition is optional. If no condition is set on the action, it is simply performed unconditionally. That can be useful for "track-sc*" actions as well as for changing the default action to a reject. Note also that it is recommended to use a "[tcp-request session](#tcp-request%20session)" rule to track information that does *not* depend on Layer 7 contents, especially for HTTP frontends. Some HTTP processing are performed at the session level and may lead to an early rejection of the requests. Thus, the tracking at the content level may be disturbed in such case. A warning is emitted during startup to prevent, as far as possible, such unreliable usage. It is perfectly possible to match layer 7 contents with "[tcp-request content](#tcp-request%20content)" rules from a TCP proxy, since HTTP-specific ACL matches are able to preliminarily parse the contents of a buffer before extracting the required data. If the buffered contents do not parse as a valid HTTP message, then the ACL does not match. The parser which is involved there is exactly the same as for all other HTTP processing, so there is no risk of parsing something differently. In an HTTP frontend or an HTTP backend, it is guaranteed that HTTP contents will always be immediately present when the rule is evaluated first because the HTTP parsing is performed in the early stages of the connection processing, at the session level. But for such proxies, using "http-request" rules is much more natural and recommended. Tracking layer7 information is also possible provided that the information are present when the rule is processed. The rule processing engine is able to wait until the inspect delay expires when the data to be tracked is not yet available. ``` Example: ``` tcp-request content use-service lua.deny if { src -f /etc/haproxy/blacklist.lst } ``` Example: ``` tcp-request content set-var(sess.my_var) src tcp-request content set-var-fmt(sess.from) %[src]:%[src_port] tcp-request content unset-var(sess.my_var2) ``` Example: ``` # Accept HTTP requests containing a Host header saying "example.com" # and reject everything else. (Only works for HTTP/1 connections) acl is_host_com hdr(Host) -i example.com tcp-request inspect-delay 30s tcp-request content accept if is_host_com tcp-request content reject # Accept HTTP requests containing a Host header saying "example.com" # and reject everything else. (works for HTTP/1 and HTTP/2 connections) acl is_host_com hdr(Host) -i example.com tcp-request inspect-delay 5s tcp-request switch-mode http if HTTP tcp-request reject # non-HTTP traffic is implicit here ... 
http-request reject unless is_host_com ``` Example: ``` # reject SMTP connection if client speaks first tcp-request inspect-delay 30s acl content_present req.len gt 0 tcp-request content reject if content_present # Forward HTTPS connection only if client speaks tcp-request inspect-delay 30s acl content_present req.len gt 0 tcp-request content accept if content_present tcp-request content reject ``` Example: ``` # Track the last IP (stick-table type string) from X-Forwarded-For tcp-request inspect-delay 10s tcp-request content track-sc0 hdr(x-forwarded-for,-1) # Or track the last IP (stick-table type ip|ipv6) from X-Forwarded-For tcp-request content track-sc0 req.hdr_ip(x-forwarded-for,-1) ``` Example: ``` # track request counts per "[base](#base)" (concatenation of Host+URL) tcp-request inspect-delay 10s tcp-request content track-sc0 base table req-rate ``` Example: Track per-frontend and per-backend counters, block abusers at the frontend when the backend detects abuse (and marks gpc0). ``` frontend http # Use General Purpose Counter 0 in SC0 as a global abuse counter # protecting all our sites stick-table type ip size 1m expire 5m store gpc0 tcp-request connection track-sc0 src tcp-request connection reject if { sc0_get_gpc0 gt 0 } ... use_backend http_dynamic if { path_end .php } backend http_dynamic # if a source makes too fast requests to this dynamic site (tracked # by SC1), block it globally in the frontend. stick-table type ip size 1m expire 5m store http_req_rate(10s) acl click_too_fast sc1_http_req_rate gt 10 acl mark_as_abuser sc0_inc_gpc0(http) gt 0 tcp-request content track-sc1 src tcp-request content reject if click_too_fast mark_as_abuser ``` ``` See [section 7](#7) about ACL usage. ``` **See also :** "[tcp-request connection](#tcp-request%20connection)", "[tcp-request session](#tcp-request%20session)", "[tcp-request inspect-delay](#tcp-request%20inspect-delay)", and "http-request". **tcp-request content accept** [ { if | unless } <condition> ] ``` This is used to accept the connection. No further "[tcp-request content](#tcp-request%20content)" rules are evaluated for the current section. ``` **tcp-request content capture** <sample> len <length> [ { if | unless } <condition> ] ``` This captures sample expression <sample> from the request buffer, and converts it to a string of at most <len> characters. The resulting string is stored into the next request "[capture](#capture)" slot, so it will possibly appear next to some captured HTTP headers. It will then automatically appear in the logs, and it will be possible to extract it using sample fetch rules to feed it into headers or anything. The length should be limited given that this size will be allocated for each capture during the whole session life. Please check [section 7.3](#7.3) (Fetching samples) and "[capture request header](#capture%20request%20header)" for more information. ``` **tcp-request content do-resolve**(<var>,<resolvers>,[ipv4,ipv6]) <expr> ``` This action performs a DNS resolution of the output of <expr> and stores the result in the variable <var>. Please refer to "[http-request do-resolve](#http-request%20do-resolve)" for a complete description. ``` **tcp-request content reject** [ { if | unless } <condition> ] ``` This is used to reject the connection. No further "[tcp-request content](#tcp-request%20content)" rules are evaluated.
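```
Example (added illustration, not part of the original manual): a sketch of content inspection in a TCP frontend; it assumes a resolvers section named "mydns" exists, and the variable name, capture length and timings are arbitrary.
```
# Wait briefly for a TLS ClientHello, capture the SNI, resolve it, and
# reject anything that does not look like a TLS handshake.
tcp-request inspect-delay 5s
tcp-request content capture req.ssl_sni len 50
tcp-request content do-resolve(txn.dstip,mydns,ipv4) req.ssl_sni
tcp-request content accept if { req.ssl_hello_type 1 }
tcp-request content reject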
``` **tcp-request content sc-inc-gpc**(<idx>,<sc-id>) [ { if | unless } <condition> ] **tcp-request content sc-inc-gpc0**(<sc-id>) [ { if | unless } <condition> ] **tcp-request content sc-inc-gpc1**(<sc-id>) [ { if | unless } <condition> ] ``` These actions increment the General Purpose Counters according to the sticky counter designated by <sc-id>. Please refer to "[http-request sc-inc-gpc](#http-request%20sc-inc-gpc)", "[http-request sc-inc-gpc0](#http-request%20sc-inc-gpc0)" and "[http-request sc-inc-gpc1](#http-request%20sc-inc-gpc1)" for a complete description. ``` **tcp-request content sc-set-gpt**(<idx>,<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] **tcp-request content sc-set-gpt0**(<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] ``` These actions set the 32-bit unsigned General Purpose Tags according to the sticky counter designated by <sc-id>. Please refer to "http-request sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. ``` **tcp-request content send-spoe-group** <engine-name> <group-name> [ { if | unless } <condition> ] ``` This action is used to trigger sending of a group of SPOE messages. Please refer to "[http-request send-spoe-group](#http-request%20send-spoe-group)" for a complete description. ``` **tcp-request content set-bandwidth-limit** <name> [limit <expr>] [period <expr>] [ { if | unless } <condition> ] ``` This action is used to enable the bandwidth limitation filter <name>, either on the upload or download direction depending on the filter type. Please refer to "[http-request set-bandwidth-limit](#http-request%20set-bandwidth-limit)" for a complete description. ``` **tcp-request content set-dst** <expr> [ { if | unless } <condition> ] **tcp-request content set-dst-port** <expr> [ { if | unless } <condition> ] ``` These actions are used to set the destination IP/Port address to the value of the specified expression. Please refer to "[http-request set-dst](#http-request%20set-dst)" and "[http-request set-dst-port](#http-request%20set-dst-port)" for a complete description. ``` **tcp-request content set-log-level** <level> [ { if | unless } <condition> ] ``` This action is used to set the log level of the current session. Please refer to "[http-request set-log-level](#http-request%20set-log-level)" for a complete description. ``` **tcp-request content set-mark** <mark> [ { if | unless } <condition> ] ``` This action is used to set the Netfilter/IPFW MARK in all packets sent to the client to the value passed in <mark> on platforms which support it. Please refer to "[http-request set-mark](#http-request%20set-mark)" for a complete description. ``` **tcp-request content set-nice** <nice> [ { if | unless } <condition> ] ``` This sets the "[nice](#nice)" factor of the current request being processed. Please refer to "[http-request set-nice](#http-request%20set-nice)" for a complete description. ``` **tcp-request content set-priority-class** <expr> [ { if | unless } <condition> ] ``` This is used to set the queue priority class of the current request. Please refer to "[http-request set-priority-class](#http-request%20set-priority-class)" for a complete description. ``` **tcp-request content set-priority-offset** <expr> [ { if | unless } <condition> ] ``` This is used to set the queue priority timestamp offset of the current request. Please refer to "[http-request set-priority-offset](#http-request%20set-priority-offset)" for a complete description.
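```
Example (added illustration, not part of the original manual): a minimal sketch of queue prioritization; the network and class values are arbitrary (lower class values are dequeued first).
```
# Serve requests from the internal network ahead of everything else.
tcp-request inspect-delay 5s
tcp-request content set-priority-class int(1) if { src 10.0.0.0/8 }
tcp-request content set-priority-class int(10)
tcp-request content accept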
``` **tcp-request content set-src** <expr> [ { if | unless } <condition> ] **tcp-request content set-src-port** <expr> [ { if | unless } <condition> ] ``` These actions are used to set the source IP/Port address to the value of the specified expression. Please refer to "[http-request set-src](#http-request%20set-src)" and "[http-request set-src-port](#http-request%20set-src-port)" for a complete description. ``` **tcp-request content set-tos** <tos> [ { if | unless } <condition> ] ``` This is used to set the TOS or DSCP field value of packets sent to the client to the value passed in <tos> on platforms which support this. Please refer to "[http-request set-tos](#http-request%20set-tos)" for a complete description. tcp-request content set-var(<var-name>[,<cond> ...]) <expr> [ { if | unless } <condition> ] tcp-request content set-var-fmt(<var-name>[,<cond> ...]) <fmt> [ { if | unless } <condition> ] This is used to set the contents of a variable. The variable is declared inline. Please refer to "http-request set-var" and "http-request set-var-fmt" for a complete description. ``` **tcp-request content silent-drop** [ rst-ttl <ttl> ] [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and makes the client-facing connection suddenly disappear using a system-dependent way that tries to prevent the client from being notified. Please refer to "[http-request silent-drop](#http-request%20silent-drop)" for a complete description. ``` **tcp-request content switch-mode http** [ proto <name> ] [ { if | unless } <condition> ] ``` This action is used to perform a connection upgrade. Only HTTP upgrades are supported for now. The protocol may optionally be specified. This action is only available for a proxy with the frontend capability. The connection upgrade is immediately performed, and the following "[tcp-request content](#tcp-request%20content)" rules are not evaluated. This upgrade method should be preferred to the implicit one which consists in relying on the backend mode. When used, it is possible to set HTTP directives in a frontend without any warning. These directives will be conditionally evaluated if the HTTP upgrade is performed. However, an HTTP backend must still be selected. It remains unsupported to route an HTTP connection (upgraded or not) to a TCP server. See [section 4](#4) about Proxies for more details on HTTP upgrades. ``` **tcp-request content track-sc0** <key> [table <table>] [ { if | unless } <condition> ] **tcp-request content track-sc1** <key> [table <table>] [ { if | unless } <condition> ] **tcp-request content track-sc2** <key> [table <table>] [ { if | unless } <condition> ] ``` This enables tracking of sticky counters from the current connection. Please refer to "[http-request track-sc0](#http-request%20track-sc0)", "[http-request track-sc1](#http-request%20track-sc1)" and "http-request track-sc2" for a complete description. ``` **tcp-request content unset-var**(<var-name>) [ { if | unless } <condition> ] ``` This is used to unset a variable. Please refer to "http-request set-var" for details about variables. ``` **tcp-request content use-service** <service-name> [ { if | unless } <condition> ] ``` This action is used to execute a TCP service which will reply to the request and stop the evaluation of the rules. This service may choose to reply by sending any valid response or it may immediately close the connection without sending anything. Outside native services, it is possible to write your own services in Lua.
No further "[tcp-request content](#tcp-request%20content)" rules are evaluated. ``` **tcp-request inspect-delay** <timeout> ``` Set the maximum allowed time to wait for data during content inspection ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | yes | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` People using HAProxy primarily as a TCP relay are often worried about the risk of passing any type of protocol to a server without any analysis. In order to be able to analyze the request contents, we must first withhold the data then analyze them. This statement simply enables withholding of data for at most the specified amount of time. TCP content inspection applies very early when a connection reaches a frontend, then very early when the connection is forwarded to a backend. This means that a connection may experience a first delay in the frontend and a second delay in the backend if both have tcp-request rules. Note that when performing content inspection, HAProxy will evaluate the whole rules for every new chunk which gets in, taking into account the fact that those data are partial. If no rule matches before the aforementioned delay, a last check is performed upon expiration, this time considering that the contents are definitive. If no delay is set, HAProxy will not wait at all and will immediately apply a verdict based on the available information. Obviously this is unlikely to be very useful and might even be racy, so such setups are not recommended. As soon as a rule matches, the request is released and continues as usual. If the timeout is reached and no rule matches, the default policy will be to let it pass through unaffected. For most protocols, it is enough to set it to a few seconds, as most clients send the full request immediately upon connection. Add 3 or more seconds to cover TCP retransmits but that's all. For some protocols, it may make sense to use large values, for instance to ensure that the client never talks before the server (e.g. SMTP), or to wait for a client to talk before passing data to the server (e.g. SSL). Note that the client timeout must cover at least the inspection delay, otherwise it will expire first. If the client closes the connection or if the buffer is full, the delay immediately expires since the contents will not be able to change anymore. This directive is only available from named defaults sections, not anonymous ones. Proxies inherit this value from their defaults section. ``` **See also :** "[tcp-request content accept](#tcp-request%20content%20accept)", "[tcp-request content reject](#tcp-request%20content%20reject)", "timeout client". **tcp-request session** <action> [{if | unless} <condition>] ``` Perform an action on a validated session depending on a layer 5 condition ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | yes | yes | no | Arguments : ``` <action> defines the action to perform if the condition applies. See below. <condition> is a standard layer5-only ACL-based condition (see [section 7](#7)). ``` ``` Once a session is validated, (i.e. after all handshakes have been completed), it is possible to evaluate some conditions to decide whether this session must be accepted or dropped or have its counters tracked. 
Those conditions cannot make use of any data contents because no buffers are allocated yet and the processing cannot wait at this stage. The main use case is to copy some early information into variables (since variables are accessible in the session), or to keep track of some information collected after the handshake, such as SSL-level elements (SNI, ciphers, client cert's CN) or information from the PROXY protocol header (e.g. track a source forwarded this way). The extracted information can thus be copied to a variable or tracked using "track-sc" rules. Of course it is also possible to decide to accept/reject as with other rulesets. Most operations performed here could also be performed in "[tcp-request content](#tcp-request%20content)" rules, except that in HTTP these rules are evaluated for each new request, and that might not always be acceptable. For example a rule might increment a counter on each evaluation. It would also be possible that a country is resolved by geolocation from the source IP address, assigned to a session-wide variable, then the source address rewritten from an HTTP header for all requests. If some contents need to be inspected in order to take the decision, the "[tcp-request content](#tcp-request%20content)" statements must be used instead. The "[tcp-request session](#tcp-request%20session)" rules are evaluated in their exact declaration order. If no rule matches or if there is no rule, the default action is to accept the incoming session. There is no specific limit to the number of rules which may be inserted. The first keyword is the rule's action. Several types of actions are supported: - accept - reject - sc-inc-gpc(<idx>,<sc-id>) - sc-inc-gpc0(<sc-id>) - sc-inc-gpc1(<sc-id>) - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } - sc-set-gpt0(<sc-id>) { <int> | <expr> } - set-dst <expr> - set-dst-port <expr> - set-mark <mark> - set-src <expr> - set-src-port <expr> - set-tos <tos> - set-var(<var-name>[,<cond> ...]) <expr> - set-var-fmt(<var-name>[,<cond> ...]) <fmt> - silent-drop - track-sc0 <key> [table <table>] - track-sc1 <key> [table <table>] - track-sc2 <key> [table <table>] - unset-var(<var-name>) The supported actions are described below. This directive is only available from named defaults sections, not anonymous ones. Rules defined in the defaults section are evaluated before ones in the associated proxy section. To avoid ambiguities, in this case the same defaults section cannot be used by proxies with the frontend capability and by proxies with the backend capability. It means a listen section cannot use a defaults section defining such rules. Note that the "if/unless" condition is optional. If no condition is set on the action, it is simply performed unconditionally. That can be useful for "track-sc*" actions as well as for changing the default action to a reject. ``` Example: Track the original source address by default, or the one advertised in the PROXY protocol header for connections coming from the local proxies. The first connection-level rule enables receipt of the PROXY protocol for these ones, the second rule tracks whatever address we decide to keep after optional decoding. ``` tcp-request connection expect-proxy layer4 if { src -f proxies.lst } tcp-request session track-sc0 src ``` Example: Accept all sessions from white-listed hosts, reject too fast sessions without counting them, and track accepted sessions.
This results in session rate being capped from abusive sources. ``` tcp-request session accept if { src -f /etc/haproxy/whitelist.lst } tcp-request session reject if { src_sess_rate gt 10 } tcp-request session track-sc0 src ``` Example: Accept all sessions from white-listed hosts, count all other sessions and reject too fast ones. This results in abusive ones being blocked as long as they don't slow down. ``` tcp-request session accept if { src -f /etc/haproxy/whitelist.lst } tcp-request session track-sc0 src tcp-request session reject if { sc0_sess_rate gt 10 } ``` ``` See [section 7](#7) about ACL usage. ``` **See also :** "[tcp-request connection](#tcp-request%20connection)", "[tcp-request content](#tcp-request%20content)", "[stick-table](#stick-table)" **tcp-request session accept** [ { if | unless } <condition> ] ``` This is used to accept the connection. No further "[tcp-request session](#tcp-request%20session)" rules are evaluated. ``` **tcp-request session reject** [ { if | unless } <condition> ] ``` This is used to reject the connection. No further "[tcp-request session](#tcp-request%20session)" rules are evaluated. ``` **tcp-request session sc-inc-gpc**(<idx>,<sc-id>) [ { if | unless } <condition> ] **tcp-request session sc-inc-gpc0**(<sc-id>) [ { if | unless } <condition> ] **tcp-request session sc-inc-gpc1**(<sc-id>) [ { if | unless } <condition> ] ``` These actions increment the General Purpose Counters according to the sticky counter designated by <sc-id>. Please refer to "[http-request sc-inc-gpc](#http-request%20sc-inc-gpc)", "[http-request sc-inc-gpc0](#http-request%20sc-inc-gpc0)" and "[http-request sc-inc-gpc1](#http-request%20sc-inc-gpc1)" for a complete description. ``` **tcp-request session sc-set-gpt**(<idx>,<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] **tcp-request session sc-set-gpt0**(<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] ``` These actions set the 32-bit unsigned General Purpose Tags according to the sticky counter designated by <sc-id>. Please refer to "tcp-request connection sc-set-gpt" and "tcp-request connection sc-set-gpt0" for a complete description. ``` **tcp-request session set-dst** <expr> [ { if | unless } <condition> ] **tcp-request session set-dst-port** <expr> [ { if | unless } <condition> ] ``` These actions are used to set the destination IP/Port address to the value of the specified expression. Please refer to "[http-request set-dst](#http-request%20set-dst)" and "[http-request set-dst-port](#http-request%20set-dst-port)" for a complete description. ``` **tcp-request session set-mark** <mark> [ { if | unless } <condition> ] ``` This action is used to set the Netfilter/IPFW MARK in all packets sent to the client to the value passed in <mark> on platforms which support it. Please refer to "[http-request set-mark](#http-request%20set-mark)" for a complete description. ``` **tcp-request session set-src** <expr> [ { if | unless } <condition> ] **tcp-request session set-src-port** <expr> [ { if | unless } <condition> ] ``` These actions are used to set the source IP/Port address to the value of the specified expression. Please refer to "[http-request set-src](#http-request%20set-src)" and "[http-request set-src-port](#http-request%20set-src-port)" for a complete description. ``` **tcp-request session set-tos** <tos> [ { if | unless } <condition> ] ``` This is used to set the TOS or DSCP field value of packets sent to the client to the value passed in <tos> on platforms which support this.
Please refer to "[http-request set-tos](#http-request%20set-tos)" for a complete description. tcp-request session set-var(<var-name>[,<cond> ...]) <expr> [ { if | unless } <condition> ] tcp-request session set-var-fmt(<var-name>[,<cond> ...]) <fmt> [ { if | unless } <condition> ] This is used to set the contents of a variable. The variable is declared inline. Please refer to "http-request set-var" and "http-request set-var-fmt" for a complete description. ``` **tcp-request session silent-drop** [ rst-ttl <ttl> ] [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and makes the client-facing connection suddenly disappear using a system-dependent way that tries to prevent the client from being notified. Please refer to "[http-request silent-drop](#http-request%20silent-drop)" for a complete description. ``` **tcp-request session track-sc0** <key> [table <table>] [ { if | unless } <condition> ] **tcp-request session track-sc1** <key> [table <table>] [ { if | unless } <condition> ] **tcp-request session track-sc2** <key> [table <table>] [ { if | unless } <condition> ] ``` This enables tracking of sticky counters from current connection. Please refer to "[http-request track-sc0](#http-request%20track-sc0)", "[http-request track-sc1](#http-request%20track-sc1)" and "http-request track-sc2" for a complete description. ``` **tcp-request session unset-var**(<var-name>) [ { if | unless } <condition> ] ``` This is used to unset a variable. Please refer to "http-request set-var" for details about variables. ``` **tcp-response content** <action> [{if | unless} <condition>] ``` Perform an action on a session response depending on a layer 4-7 condition ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | no | yes | yes | Arguments : ``` <action> defines the action to perform if the condition applies. See below. <condition> is a standard layer 4-7 ACL-based condition (see [section 7](#7)). ``` ``` Response contents can be analyzed at an early stage of response processing called "TCP content inspection". During this stage, ACL-based rules are evaluated every time the response contents are updated, until either an "accept", "close" or a "reject" rule matches, or a TCP response inspection delay is set and expires with no matching rule. Most often, these decisions will consider a protocol recognition or validity. Content-based rules are evaluated in their exact declaration order. If no rule matches or if there is no rule, the default action is to accept the contents. There is no specific limit to the number of rules which may be inserted. The first keyword is the rule's action. Several types of actions are supported: - accept - close - reject - sc-inc-gpc(<idx>,<sc-id>) - sc-inc-gpc0(<sc-id>) - sc-inc-gpc1(<sc-id>) - sc-set-gpt(<idx>,<sc-id>) { <int> | <expr> } - sc-set-gpt0(<sc-id>) { <int> | <expr> } - send-spoe-group <engine-name> <group-name> - set-bandwidth-limit <name> [limit <expr>] [period <expr>] - set-log-level <level> - set-mark <mark> - set-nice <nice> - set-tos <tos> - set-var(<var-name>[,<cond> ...]) <expr> - set-var-fmt(<var-name>[,<cond> ...]) <fmt> - silent-drop - unset-var(<var-name>) The supported actions are described below. This directive is only available from named defaults sections, not anonymous ones. Rules defined in the defaults section are evaluated before ones in the associated proxy section. 
To avoid ambiguities, in this case the same defaults section cannot be used by proxies with the frontend capability and by proxies with the backend capability. It means a listen section cannot use a defaults section defining such rules. Note that the "if/unless" condition is optional. If no condition is set on the action, it is simply performed unconditionally. That can be useful for changing the default action to a reject. It is perfectly possible to match layer 7 contents with "tcp-response content" rules, but then it is important to ensure that a full response has been buffered, otherwise no contents will match. In order to achieve this, the best solution involves detecting the HTTP protocol during the inspection period. See [section 7](#7) about ACL usage. ``` **See also :** "[tcp-request content](#tcp-request%20content)", "[tcp-response inspect-delay](#tcp-response%20inspect-delay)" **tcp-response content accept** [ { if | unless } <condition> ] ``` This is used to accept the response. No further "[tcp-response content](#tcp-response%20content)" rules are evaluated. ``` **tcp-response content close** [ { if | unless } <condition> ] ``` This is used to immediately close the connection with the server. No further "[tcp-response content](#tcp-response%20content)" rules are evaluated. The main purpose of this action is to force a connection to be finished between a client and a server after an exchange when the application protocol expects some long timeouts to elapse first. The goal is to eliminate idle connections which take significant resources on servers with certain protocols. ``` **tcp-response content reject** [ { if | unless } <condition> ] ``` This is used to reject the response. No further "[tcp-response content](#tcp-response%20content)" rules are evaluated. ``` **tcp-response content sc-inc-gpc**(<idx>,<sc-id>) [ { if | unless } <condition> ] **tcp-response content sc-inc-gpc0**(<sc-id>) [ { if | unless } <condition> ] **tcp-response content sc-inc-gpc1**(<sc-id>) [ { if | unless } <condition> ] ``` These actions increment the General Purpose Counters according to the sticky counter designated by <sc-id>. Please refer to "[http-request sc-inc-gpc](#http-request%20sc-inc-gpc)", "[http-request sc-inc-gpc0](#http-request%20sc-inc-gpc0)" and "[http-request sc-inc-gpc1](#http-request%20sc-inc-gpc1)" for a complete description. ``` **tcp-response content sc-set-gpt**(<idx>,<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] **tcp-response content sc-set-gpt0**(<sc-id>) { <int> | <expr> } [ { if | unless } <condition> ] ``` These actions set the 32-bit unsigned General Purpose Tags according to the sticky counter designated by <sc-id>. Please refer to "http-request sc-set-gpt" and "http-request sc-set-gpt0" for a complete description. ``` **tcp-response content send-spoe-group** <engine-name> <group-name> [ { if | unless } <condition> ] ``` This action is used to trigger sending of a group of SPOE messages. Please refer to "[http-request send-spoe-group](#http-request%20send-spoe-group)" for a complete description. ``` **tcp-response content set-bandwidth-limit** <name> [limit <expr>] [period <expr>] [ { if | unless } <condition> ] ``` This action is used to enable the bandwidth limitation filter <name>, either on the upload or download direction depending on the filter type. Please refer to "[http-request set-bandwidth-limit](#http-request%20set-bandwidth-limit)" for a complete description.
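```
Example (added illustration, not part of the original manual): a sketch for a TCP listen section proxying SMTP, where the server is expected to speak first; the delay and ACL name are arbitrary.
```
# Accept the connection once the server has sent its banner, otherwise
# close it when the inspection delay expires.
tcp-response inspect-delay 10s
acl banner_seen res.len gt 0
tcp-response content accept if banner_seen
tcp-response content close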
``` **tcp-response content set-log-level** <level> [ { if | unless } <condition> ] ``` This action is used to set the log level of the current session. Please refer to "[http-request set-log-level](#http-request%20set-log-level)". for a complete description. ``` **tcp-response content set-mark** <mark> [ { if | unless } <condition> ] ``` This action is used to set the Netfilter/IPFW MARK in all packets sent to the client to the value passed in <mark> on platforms which support it. Please refer to "[http-request set-mark](#http-request%20set-mark)" for a complete description. ``` **tcp-response content set-nice** <nice> [ { if | unless } <condition> ] ``` This sets the "[nice](#nice)" factor of the current request being processed. Please refer to "[http-request set-nice](#http-request%20set-nice)" for a complete description. ``` **tcp-response content set-tos** <tos> [ { if | unless } <condition> ] ``` This is used to set the TOS or DSCP field value of packets sent to the client to the value passed in <tos> on platforms which support this. Please refer to "[http-request set-tos](#http-request%20set-tos)" for a complete description. tcp-response content set-var(<var-name>[,<cond> ...]) <expr> [ { if | unless } <condition> ] tcp-response content set-var-fmt(<var-name>[,<cond> ...]) <fmt> [ { if | unless } <condition> ] This is used to set the contents of a variable. The variable is declared inline. Please refer to "http-request set-var" and "http-request set-var-fmt" for a complete description. ``` **tcp-response content silent-drop** [ rst-ttl <ttl> ] [ { if | unless } <condition> ] ``` This stops the evaluation of the rules and makes the client-facing connection suddenly disappear using a system-dependent way that tries to prevent the client from being notified. Please refer to "[http-request silent-drop](#http-request%20silent-drop)" for a complete description. ``` **tcp-response content unset-var**(<var-name>) [ { if | unless } <condition> ] ``` This is used to unset a variable. Please refer to "http-request set-var" for details about variables. ``` **tcp-response inspect-delay** <timeout> ``` Set the maximum allowed time to wait for a response during content inspection ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes(!) | no | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` This directive is only available from named defaults sections, not anonymous ones. Proxies inherit this value from their defaults section. ``` **See also :** "[tcp-response content](#tcp-response%20content)", "[tcp-request inspect-delay](#tcp-request%20inspect-delay)". **timeout check** <timeout> ``` Set additional check timeout, but only after a connection has been already established. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments: ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` If set, HAProxy uses min("timeout connect", "[inter](#inter)") as a connect timeout for check and "[timeout check](#timeout%20check)" as an additional read timeout. The "min" is used so that people running with *very* long "timeout connect" (e.g. 
those who needed this due to the queue or tarpit) do not slow down their checks. (Please also note that there is no valid reason to have such long connect timeouts, because "[timeout queue](#timeout%20queue)" and "[timeout tarpit](#timeout%20tarpit)" can always be used to avoid that). If "[timeout check](#timeout%20check)" is not set HAProxy uses "[inter](#inter)" for complete check timeout (connect + read) exactly like all <1.3.15 version. In most cases check request is much simpler and faster to handle than normal requests and people may want to kick out laggy servers so this timeout should be smaller than "timeout server". This parameter is specific to backends, but can be specified once for all in "defaults" sections. This is in fact one of the easiest solutions not to forget about it. This directive is only available from named defaults sections, not anonymous ones. Proxies inherit this value from their defaults section. ``` **See also:** "timeout connect", "[timeout queue](#timeout%20queue)", "timeout server", "[timeout tarpit](#timeout%20tarpit)". **timeout client** <timeout> ``` Set the maximum inactivity time on the client side. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` The inactivity timeout applies when the client is expected to acknowledge or send data. In HTTP mode, this timeout is particularly important to consider during the first phase, when the client sends the request, and during the response while it is reading data sent by the server. That said, for the first phase, it is preferable to set the "[timeout http-request](#timeout%20http-request)" to better protect HAProxy from Slowloris like attacks. The value is specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as specified at the top of this document. In TCP mode (and to a lesser extent, in HTTP mode), it is highly recommended that the client timeout remains equal to the server timeout in order to avoid complex situations to debug. It is a good practice to cover one or several TCP packet losses by specifying timeouts that are slightly above multiples of 3 seconds (e.g. 4 or 5 seconds). If some long-lived sessions are mixed with short-lived sessions (e.g. WebSocket and HTTP), it's worth considering "[timeout tunnel](#timeout%20tunnel)", which overrides "timeout client" and "timeout server" for tunnels, as well as "[timeout client-fin](#timeout%20client-fin)" for half-closed connections. This parameter is specific to frontends, but can be specified once for all in "defaults" sections. This is in fact one of the easiest solutions not to forget about it. An unspecified timeout results in an infinite timeout, which is not recommended. Such a usage is accepted and works but reports a warning during startup because it may result in accumulation of expired sessions in the system if the system's timeouts are not configured either. ``` **See also :** "timeout server", "[timeout tunnel](#timeout%20tunnel)", "[timeout http-request](#timeout%20http-request)". **timeout client-fin** <timeout> ``` Set the inactivity timeout on the client side for half-closed connections. 
``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` The inactivity timeout applies when the client is expected to acknowledge or send data while one direction is already shut down. This timeout is different from "timeout client" in that it only applies to connections which are closed in one direction. This is particularly useful to avoid keeping connections in FIN_WAIT state for too long when clients do not disconnect cleanly. This problem is particularly common long connections such as RDP or WebSocket. Note that this timeout can override "[timeout tunnel](#timeout%20tunnel)" when a connection shuts down in one direction. It is applied to idle HTTP/2 connections once a GOAWAY frame was sent, often indicating an expectation that the connection quickly ends. This parameter is specific to frontends, but can be specified once for all in "defaults" sections. By default it is not set, so half-closed connections will use the other timeouts (timeout.client or timeout.tunnel). ``` **See also :** "timeout client", "[timeout server-fin](#timeout%20server-fin)", and "[timeout tunnel](#timeout%20tunnel)". **timeout connect** <timeout> ``` Set the maximum time to wait for a connection attempt to a server to succeed. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` If the server is located on the same LAN as HAProxy, the connection should be immediate (less than a few milliseconds). Anyway, it is a good practice to cover one or several TCP packet losses by specifying timeouts that are slightly above multiples of 3 seconds (e.g. 4 or 5 seconds). By default, the connect timeout also presets both queue and tarpit timeouts to the same value if these have not been specified. This parameter is specific to backends, but can be specified once for all in "defaults" sections. This is in fact one of the easiest solutions not to forget about it. An unspecified timeout results in an infinite timeout, which is not recommended. Such a usage is accepted and works but reports a warning during startup because it may result in accumulation of failed sessions in the system if the system's timeouts are not configured either. ``` **See also:** "[timeout check](#timeout%20check)", "[timeout queue](#timeout%20queue)", "timeout server", "[timeout tarpit](#timeout%20tarpit)". **timeout http-keep-alive** <timeout> ``` Set the maximum allowed time to wait for a new HTTP request to appear ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` By default, the time to wait for a new request in case of keep-alive is set by "[timeout http-request](#timeout%20http-request)". 
However this is not always convenient because some people want very short keep-alive timeouts in order to release connections faster, and others prefer to have larger ones but still have short timeouts once the request has started to present itself. The "[http-keep-alive](#option%20http-keep-alive)" timeout covers these needs. It will define how long to wait for a new HTTP request to start coming after a response was sent. Once the first byte of request has been seen, the "http-request" timeout is used to wait for the complete request to come. Note that empty lines prior to a new request do not refresh the timeout and are not counted as a new request. There is also another difference between the two timeouts : when a connection expires during timeout http-keep-alive, no error is returned, the connection just closes. If the connection expires in "http-request" while waiting for a connection to complete, a HTTP 408 error is returned. In general it is optimal to set this value to a few tens to hundreds of milliseconds, to allow users to fetch all objects of a page at once but without waiting for further clicks. Also, if set to a very small value (e.g. 1 millisecond) it will probably only accept pipelined requests but not the non-pipelined ones. It may be a nice trade-off for very large sites running with tens to hundreds of thousands of clients. If this parameter is not set, the "http-request" timeout applies, and if both are not set, "timeout client" still applies at the lower level. It should be set in the frontend to take effect, unless the frontend is in TCP mode, in which case the HTTP backend's timeout will be used. ``` **See also :** "[timeout http-request](#timeout%20http-request)", "timeout client". **timeout http-request** <timeout> ``` Set the maximum allowed time to wait for a complete HTTP request ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` In order to offer DoS protection, it may be required to lower the maximum accepted time to receive a complete HTTP request without affecting the client timeout. This helps protecting against established connections on which nothing is sent. The client timeout cannot offer a good protection against this abuse because it is an inactivity timeout, which means that if the attacker sends one character every now and then, the timeout will not trigger. With the HTTP request timeout, no matter what speed the client types, the request will be aborted if it does not complete in time. When the timeout expires, an HTTP 408 response is sent to the client to inform it about the problem, and the connection is closed. The logs will report termination codes "cR". Some recent browsers are having problems with this standard, well-documented behavior, so it might be needed to hide the 408 code using "[option http-ignore-probes](#option%20http-ignore-probes)" or "errorfile 408 /dev/null". See more details in the explanations of the "cR" termination code in [section 8.5](#8.5). By default, this timeout only applies to the header part of the request, and not to any data. As soon as the empty line is received, this timeout is not used anymore. When combined with "[option http-buffer-request](#option%20http-buffer-request)", this timeout also applies to the body of the request.. 
It is used again on keep-alive connections to wait for a second request if "[timeout http-keep-alive](#timeout%20http-keep-alive)" is not set. Generally it is enough to set it to a few seconds, as most clients send the full request immediately upon connection. Add 3 or more seconds to cover TCP retransmits but that's all. Setting it to very low values (e.g. 50 ms) will generally work on local networks as long as there are no packet losses. This will prevent people from sending bare HTTP requests using telnet. If this parameter is not set, the client timeout still applies between each chunk of the incoming request. It should be set in the frontend to take effect, unless the frontend is in TCP mode, in which case the HTTP backend's timeout will be used. ``` **See also :** "errorfile", "[http-ignore-probes](#option%20http-ignore-probes)", "[timeout http-keep-alive](#timeout%20http-keep-alive)", and "timeout client", "[option http-buffer-request](#option%20http-buffer-request)". **timeout queue** <timeout> ``` Set the maximum time to wait in the queue for a connection slot to be free ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` When a server's maxconn is reached, connections are left pending in a queue which may be server-specific or global to the backend. In order not to wait indefinitely, a timeout is applied to requests pending in the queue. If the timeout is reached, it is considered that the request will almost never be served, so it is dropped and a 503 error is returned to the client. The "[timeout queue](#timeout%20queue)" statement allows to fix the maximum time for a request to be left pending in a queue. If unspecified, the same value as the backend's connection timeout ("timeout connect") is used, for backwards compatibility with older versions with no "[timeout queue](#timeout%20queue)" parameter. ``` **See also :** "timeout connect". **timeout server** <timeout> ``` Set the maximum inactivity time on the server side. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` The inactivity timeout applies when the server is expected to acknowledge or send data. In HTTP mode, this timeout is particularly important to consider during the first phase of the server's response, when it has to send the headers, as it directly represents the server's processing time for the request. To find out what value to put there, it's often good to start with what would be considered as unacceptable response times, then check the logs to observe the response time distribution, and adjust the value accordingly. The value is specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as specified at the top of this document. In TCP mode (and to a lesser extent, in HTTP mode), it is highly recommended that the client timeout remains equal to the server timeout in order to avoid complex situations to debug. 
Whatever the expected server response times, it is a good practice to cover at least one or several TCP packet losses by specifying timeouts that are slightly above multiples of 3 seconds (e.g. 4 or 5 seconds minimum). If some long-lived sessions are mixed with short-lived sessions (e.g. WebSocket and HTTP), it's worth considering "[timeout tunnel](#timeout%20tunnel)", which overrides "timeout client" and "timeout server" for tunnels. This parameter is specific to backends, but can be specified once for all in "defaults" sections. This is in fact one of the easiest solutions not to forget about it. An unspecified timeout results in an infinite timeout, which is not recommended. Such a usage is accepted and works but reports a warning during startup because it may result in accumulation of expired sessions in the system if the system's timeouts are not configured either. ``` **See also :** "timeout client" and "[timeout tunnel](#timeout%20tunnel)". **timeout server-fin** <timeout> ``` Set the inactivity timeout on the server side for half-closed connections. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` The inactivity timeout applies when the server is expected to acknowledge or send data while one direction is already shut down. This timeout is different from "timeout server" in that it only applies to connections which are closed in one direction. This is particularly useful to avoid keeping connections in FIN_WAIT state for too long when a remote server does not disconnect cleanly. This problem is particularly common long connections such as RDP or WebSocket. Note that this timeout can override "[timeout tunnel](#timeout%20tunnel)" when a connection shuts down in one direction. This setting was provided for completeness, but in most situations, it should not be needed. This parameter is specific to backends, but can be specified once for all in "defaults" sections. By default it is not set, so half-closed connections will use the other timeouts (timeout.server or timeout.tunnel). ``` **See also :** "[timeout client-fin](#timeout%20client-fin)", "timeout server", and "[timeout tunnel](#timeout%20tunnel)". **timeout tarpit** <timeout> ``` Set the duration for which tarpitted connections will be maintained ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | yes | Arguments : ``` <timeout> is the tarpit duration specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` When a connection is tarpitted using "[http-request tarpit](#http-request%20tarpit)", it is maintained open with no activity for a certain amount of time, then closed. "timeout tarpit" defines how long it will be maintained open. The value is specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as specified at the top of this document. If unspecified, the same value as the backend's connection timeout ("timeout connect") is used, for backwards compatibility with older versions with no "[timeout tarpit](#timeout%20tarpit)" parameter. ``` **See also :** "timeout connect". 
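Example (an illustrative sketch combining the backend-side timeouts described above; the backend name, server address and values are placeholders, not recommendations):

```
backend app
    timeout connect    5s
    timeout queue      30s
    timeout server     30s
    timeout server-fin 30s
    timeout tarpit     15s
    server app1 192.168.0.10:8080 check
```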
**timeout tunnel** <timeout> ``` Set the maximum inactivity time on the client and server side for tunnels. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : ``` <timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document. ``` ``` The tunnel timeout applies when a bidirectional connection is established between a client and a server, and the connection remains inactive in both directions. This timeout supersedes both the client and server timeouts once the connection becomes a tunnel. In TCP, this timeout is used as soon as no analyzer remains attached to either connection (e.g. tcp content rules are accepted). In HTTP, this timeout is used when a connection is upgraded (e.g. when switching to the WebSocket protocol, or forwarding a CONNECT request to a proxy), or after the first response when no keepalive/close option is specified. Since this timeout is usually used in conjunction with long-lived connections, it usually is a good idea to also set "[timeout client-fin](#timeout%20client-fin)" to handle the situation where a client suddenly disappears from the net and does not acknowledge a close, or sends a shutdown and does not acknowledge pending data anymore. This can happen in lossy networks where firewalls are present, and is detected by the presence of large amounts of sessions in a FIN_WAIT state. The value is specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as specified at the top of this document. Whatever the expected normal idle time, it is a good practice to cover at least one or several TCP packet losses by specifying timeouts that are slightly above multiples of 3 seconds (e.g. 4 or 5 seconds minimum). This parameter is specific to backends, but can be specified once for all in "defaults" sections. This is in fact one of the easiest solutions not to forget about it. ``` Example : ``` defaults http option http-server-close timeout connect 5s timeout client 30s timeout client-fin 30s timeout server 30s timeout tunnel 1h # timeout to use with WebSocket and CONNECT ``` **See also :** "timeout client", "[timeout client-fin](#timeout%20client-fin)", "timeout server". **transparent** (deprecated) ``` Enable client-side transparent proxying ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | no | yes | yes | Arguments : none ``` This keyword was introduced in order to provide layer 7 persistence to layer 3 load balancers. The idea is to use the OS's ability to redirect an incoming connection for a remote address to a local process (here HAProxy), and let this process know what address was initially requested. When this option is used, sessions without cookies will be forwarded to the original destination IP address of the incoming request (which should match that of another equipment), while requests with cookies will still be forwarded to the appropriate server. The "[transparent](#option%20transparent)" keyword is deprecated, use "[option transparent](#option%20transparent)" instead. Note that contrary to a common belief, this option does NOT make HAProxy present the client's IP to the server when establishing the connection. ``` **See also:** "[option transparent](#option%20transparent)" **unique-id-format** <string> ``` Generate a unique ID for each request. 
``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <string> is a log-format string. ``` ``` This keyword creates an ID for each request using the custom log format. A unique ID is useful to trace a request passing through many components of a complex infrastructure. The newly created ID may also be logged using the %ID tag in the log-format string. The format should be composed from elements that are guaranteed to be unique when combined together. For instance, if multiple HAProxy instances are involved, it might be important to include the node name. It is often needed to log the incoming connection's source and destination addresses and ports. Note that since multiple requests may be performed over the same connection, including a request counter may help differentiate them. Similarly, a timestamp may protect against a rollover of the counter. Logging the process ID will avoid collisions after a service restart. It is recommended to use hexadecimal notation for many fields since it makes them more compact and saves space in logs. ``` Example: ``` unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid will generate: 7F000001:8296_7F00001E:1F90_4F7B0A69_0003:790A ``` **See also:** "[unique-id-header](#unique-id-header)" **unique-id-header** <name> ``` Add a unique ID header in the HTTP request. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | yes | yes | yes | no | Arguments : ``` <name> is the name of the header. ``` ``` Add a unique-id header in the HTTP request sent to the server, using the unique-id-format. It can't work if "unique-id-format" is not set. ``` Example: ``` unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid unique-id-header X-Unique-ID will generate: X-Unique-ID: 7F000001:8296_7F00001E:1F90_4F7B0A69_0003:790A ``` **See also:** "[unique-id-format](#unique-id-format)" **use\_backend** <backend> [{if | unless} <condition>] ``` Switch to a specific backend if/unless an ACL-based condition is matched. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | yes | yes | no | Arguments : ``` <backend> is the name of a valid backend or "listen" section, or a "[log-format](#log-format)" string resolving to a backend name. <condition> is a condition composed of ACLs, as described in [section 7](#7). If it is omitted, the rule is unconditionally applied. ``` ``` When doing content-switching, connections arrive on a frontend and are then dispatched to various backends depending on a number of conditions. The relation between the conditions and the backends is described with the "[use\_backend](#use_backend)" keyword. While it is normally used with HTTP processing, it can also be used in pure TCP, either without content using stateless ACLs (e.g. source address validation) or combined with a "[tcp-request](#tcp-request)" rule to wait for some payload. There may be as many "[use\_backend](#use_backend)" rules as desired. All of these rules are evaluated in their declaration order, and the first one which matches will assign the backend. In the first form, the backend will be used if the condition is met. In the second form, the backend will be used if the condition is not met. If no condition is valid, the backend defined with "[default\_backend](#default_backend)" will be used.
If no default backend is defined, either the servers in the same section are used (in case of a "listen" section) or, in case of a frontend, no server is used and a 503 service unavailable response is returned. Note that it is possible to switch from a TCP frontend to an HTTP backend. In this case, either the frontend has already checked that the protocol is HTTP, and backend processing will immediately follow, or the backend will wait for a complete HTTP request to get in. This feature is useful when a frontend must decode several protocols on a unique port, one of them being HTTP. When <backend> is a simple name, it is resolved at configuration time, and an error is reported if the specified backend does not exist. If <backend> is a log-format string instead, no check may be done at configuration time, so the backend name is resolved dynamically at run time. If the resulting backend name does not correspond to any valid backend, no other rule is evaluated, and the default_backend directive is applied instead. Note that when using dynamic backend names, it is highly recommended to use a prefix that no other backend uses in order to ensure that an unauthorized backend cannot be forced from the request. It is worth mentioning that "[use\_backend](#use_backend)" rules with an explicit name are used to detect the association between frontends and backends to compute the backend's "[fullconn](#fullconn)" setting. This cannot be done for dynamic names. ``` **See also:** "[default\_backend](#default_backend)", "[tcp-request](#tcp-request)", "[fullconn](#fullconn)", "[log-format](#log-format)", and [section 7](#7) about ACLs. **use-fcgi-app** <name> ``` Defines the FastCGI application to use for the backend. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | Arguments : ``` <name> is the name of the FastCGI application to use. ``` ``` See [section 10.1](#10.1) about FastCGI application setup for details. ``` **use-server** <server> if <condition> **use-server** <server> unless <condition> ``` Only use a specific server if/unless an ACL-based condition is matched. ``` May be used in sections : | defaults | frontend | listen | backend | | --- | --- | --- | --- | | no | no | yes | yes | Arguments : ``` <server> is the name of a valid server in the same backend section or a "[log-format](#log-format)" string resolving to a server name. <condition> is a condition composed of ACLs, as described in [section 7](#7). ``` ``` By default, connections which arrive to a backend are load-balanced across the available servers according to the configured algorithm, unless a persistence mechanism such as a cookie is used and found in the request. Sometimes it is desirable to forward a particular request to a specific server without having to declare a dedicated backend for this server. This can be achieved using the "[use-server](#use-server)" rules. These rules are evaluated after the "[redirect](#redirect)" rules and before evaluating cookies, and they have precedence on them. There may be as many "[use-server](#use-server)" rules as desired. All of these rules are evaluated in their declaration order, and the first one which matches will assign the server. If a rule designates a server which is down, and "[option persist](#option%20persist)" is not used and no force-persist rule was validated, it is ignored and evaluation goes on with the next rules until one matches. In the first form, the server will be used if the condition is met. 
In the second form, the server will be used if the condition is not met. If no condition is valid, the processing continues and the server will be assigned according to other persistence mechanisms. Note that even if a rule is matched, cookie processing is still performed but does not assign the server. This allows prefixed cookies to have their prefix stripped. The "[use-server](#use-server)" statement works both in HTTP and TCP mode. This makes it suitable for use with content-based inspection. For instance, a server could be selected in a farm according to the TLS SNI field when using protocols with implicit TLS (also see "[req.ssl\_sni](#req.ssl_sni)"). And if these servers have their weight set to zero, they will not be used for other traffic. ``` Example : ``` # intercept incoming TLS requests based on the SNI field use-server www if { req.ssl_sni -i www.example.com } server www 192.168.0.1:443 weight 0 use-server mail if { req.ssl_sni -i mail.example.com } server mail 192.168.0.1:465 weight 0 use-server imap if { req.ssl_sni -i imap.example.com } server imap 192.168.0.1:993 weight 0 # all the rest is forwarded to this server server default 192.168.0.2:443 check ``` ``` When <server> is a simple name, it is checked against existing servers in the configuration and an error is reported if the specified server does not exist. If it is a log-format, no check is performed when parsing the configuration, and if we can't resolve a valid server name at runtime but the use-server rule was conditioned by an ACL returning true, no other use-server rule is applied and we fall back to load balancing. ``` **See also:** "[use\_backend](#use_backend)", [section 5](#5) about server and [section 7](#7) about ACLs. 5. Bind and server options --------------------------- ``` The "bind", "server" and "default-server" keywords support a number of settings depending on some build options and on the system HAProxy was built on. These settings generally each consist in one word sometimes followed by a value, written on the same line as the "bind" or "server" line. All these options are described in this section. ``` ### 5.1. Bind options ``` The "bind" keyword supports a certain number of settings which are all passed as arguments on the same line. The order in which those arguments appear is of no importance, provided that they appear after the bind address. All of these parameters are optional. Some of them consist in a single word (booleans), while others expect a value after them. In this case, the value must be provided immediately after the setting name. The currently supported settings are the following ones. ``` **accept-netscaler-cip** <magic number> ``` Enforces the use of the NetScaler Client IP insertion protocol over any connection accepted by any of the TCP sockets declared on the same line. The NetScaler Client IP insertion protocol dictates the layer 3/4 addresses of the incoming connection to be used everywhere an address is used, with the only exception of "[tcp-request connection](#tcp-request%20connection)" rules which will only see the real connection address. Logs will reflect the addresses indicated in the protocol, unless it is violated, in which case the real address will still be used. This keyword combined with support from external components can be used as an efficient and reliable alternative to the X-Forwarded-For mechanism which is not always reliable and not even always usable.
See also "[tcp-request connection expect-netscaler-cip](#tcp-request%20connection%20expect-netscaler-cip)" for a finer-grained setting of which client is allowed to use the protocol. ``` **accept-proxy** ``` Enforces the use of the PROXY protocol over any connection accepted by any of the sockets declared on the same line. Versions 1 and 2 of the PROXY protocol are supported and correctly detected. The PROXY protocol dictates the layer 3/4 addresses of the incoming connection to be used everywhere an address is used, with the only exception of "[tcp-request connection](#tcp-request%20connection)" rules which will only see the real connection address. Logs will reflect the addresses indicated in the protocol, unless it is violated, in which case the real address will still be used. This keyword combined with support from external components can be used as an efficient and reliable alternative to the X-Forwarded-For mechanism which is not always reliable and not even always usable. See also "[tcp-request connection expect-proxy](#tcp-request%20connection%20expect-proxy)" for a finer-grained setting of which client is allowed to use the protocol. ``` **allow-0rtt** ``` Allow receiving early data when using TLSv1.3. This is disabled by default, due to security considerations. Because it is vulnerable to replay attacks, you should only allow if for requests that are safe to replay, i.e. requests that are idempotent. You can use the "wait-for-handshake" action for any request that wouldn't be safe with early data. ``` **alpn** <protocols> ``` This enables the TLS ALPN extension and advertises the specified protocol list as supported on top of ALPN. The protocol list consists in a comma- delimited list of protocol names, for instance: "http/1.1,http/1.0" (without quotes). This requires that the SSL library is built with support for TLS extensions enabled (check with haproxy -vv). The ALPN extension replaces the initial NPN extension. ALPN is required to enable HTTP/2 on an HTTP frontend. Versions of OpenSSL prior to 1.0.2 didn't support ALPN and only supposed the now obsolete NPN extension. At the time of writing this, most browsers still support both ALPN and NPN for HTTP/2 so a fallback to NPN may still work for a while. But ALPN must be used whenever possible. If both HTTP/2 and HTTP/1.1 are expected to be supported, both versions can be advertised, in order of preference, like below : bind :443 ssl crt pub.pem alpn h2,http/1.1 QUIC supports only h3 and hq-interop as ALPN. h3 is for HTTP/3 and hq-interop is used for http/0.9 and QUIC interop runner (see https://interop.seemann.io). ``` **backlog** <backlog> ``` Sets the socket's backlog to this value. If unspecified or 0, the frontend's backlog is used instead, which generally defaults to the maxconn value. ``` **curves** <curves> ``` This setting is only available when support for OpenSSL was built in. It sets the string describing the list of elliptic curves algorithms ("curve suite") that are negotiated during the SSL/TLS handshake with ECDHE. The format of the string is a colon-delimited list of curve name. ``` Example: ``` "X25519:P-256" (without quote)When "[curves](#curves)" is set, "[ecdhe](#ecdhe)" parameter is ignored. ``` **ecdhe** <named curve> ``` This setting is only available when support for OpenSSL was built in. It sets the named curve (RFC 4492) used to generate ECDH ephemeral keys. By default, used named curve is prime256v1. ``` **ca-file** <cafile> ``` This setting is only available when support for OpenSSL was built in. 
It designates a PEM file from which to load CA certificates used to verify the client's certificate. It is possible to load a directory containing multiple CAs; in this case HAProxy will try to load every ".pem", ".crt", ".cer", and ".crl" file available in the directory; files starting with a dot are ignored. Warning: The "@system-ca" parameter could be used in place of the cafile in order to use the trusted CAs of your system, as is done with the server directive. But you mustn't use it unless you know what you are doing. Configuring it this way basically means that the bind will accept any client certificate generated from one of the CAs present on your system, which is extremely insecure. ``` **ca-ignore-err** [all|<errorID>,...] ``` This setting is only available when support for OpenSSL was built in. Sets a comma separated list of errorIDs to ignore during verify at depth > 0. It could be a numerical ID, or the constant name (X509_V_ERR) which is available in the OpenSSL documentation: https://www.openssl.org/docs/manmaster/man3/X509_STORE_CTX_get_error.html#ERROR-CODES It is recommended to use the constant name as the numerical value can change in new versions of OpenSSL. If set to 'all', all errors are ignored. SSL handshake is not aborted if an error is ignored. ``` **ca-sign-file** <cafile> ``` This setting is only available when support for OpenSSL was built in. It designates a PEM file containing both the CA certificate and the CA private key used to create and sign server's certificates. This is a mandatory setting when the dynamic generation of certificates is enabled. See 'generate-certificates' for details. ``` **ca-sign-pass** <passphrase> ``` This setting is only available when support for OpenSSL was built in. It is the CA private key passphrase. This setting is optional and used only when the dynamic generation of certificates is enabled. See 'generate-certificates' for details. ``` **ca-verify-file** <cafile> ``` This setting designates a PEM file from which to load CA certificates used to verify the client's certificate. It designates CA certificates which must not be included in CA names sent in the server hello message. Typically, "ca-file" must be defined with intermediate certificates, and "[ca-verify-file](#ca-verify-file)" with certificates ending the chain, like the root CA. ``` **ciphers** <ciphers> ``` This setting is only available when support for OpenSSL was built in. It sets the string describing the list of cipher algorithms ("cipher suite") that are negotiated during the SSL/TLS handshake up to TLSv1.2. The format of the string is defined in "man 1 ciphers" from OpenSSL man pages. For background information and recommendations see e.g. (https://wiki.mozilla.org/Security/Server_Side_TLS) and (https://mozilla.github.io/server-side-tls/ssl-config-generator/). For TLSv1.3 cipher configuration, please check the "ciphersuites" keyword. ``` **ciphersuites** <ciphersuites> ``` This setting is only available when support for OpenSSL was built in and OpenSSL 1.1.1 or later was used to build HAProxy. It sets the string describing the list of cipher algorithms ("cipher suite") that are negotiated during the TLSv1.3 handshake. The format of the string is defined in "man 1 ciphers" from OpenSSL man pages under the "ciphersuites" section. For cipher configuration for TLSv1.2 and earlier, please check the "ciphers" keyword. ``` **crl-file** <crlfile> ``` This setting is only available when support for OpenSSL was built in.
It designates a PEM file from which to load certificate revocation list used to verify client's certificate. You need to provide a certificate revocation list for every certificate of your certificate authority chain. ``` **crt** <cert> ``` This setting is only available when support for OpenSSL was built in. It designates a PEM file containing both the required certificates and any associated private keys. This file can be built by concatenating multiple PEM files into one (e.g. cat cert.pem key.pem > combined.pem). If your CA requires an intermediate certificate, this can also be concatenated into this file. Intermediate certificate can also be shared in a directory via "[issuers-chain-path](#issuers-chain-path)" directive. If the file does not contain a private key, HAProxy will try to load the key at the same path suffixed by a ".key". If the OpenSSL used supports Diffie-Hellman, parameters present in this file are loaded. If a directory name is used instead of a PEM file, then all files found in that directory will be loaded in alphabetic order unless their name ends with '.key', '.issuer', '.ocsp' or '.sctl' (reserved extensions). Files starting with a dot are also ignored. This directive may be specified multiple times in order to load certificates from multiple files or directories. The certificates will be presented to clients who provide a valid TLS Server Name Indication field matching one of their CN or alt subjects. Wildcards are supported, where a wildcard character '*' is used instead of the first hostname component (e.g. *.example.org matches www.example.org but not www.sub.example.org). If no SNI is provided by the client or if the SSL library does not support TLS extensions, or if the client provides an SNI hostname which does not match any certificate, then the first loaded certificate will be presented. This means that when loading certificates from a directory, it is highly recommended to load the default one first as a file or to ensure that it will always be the first one in the directory. Note that the same cert may be loaded multiple times without side effects. Some CAs (such as GoDaddy) offer a drop down list of server types that do not include HAProxy when obtaining a certificate. If this happens be sure to choose a web server that the CA believes requires an intermediate CA (for GoDaddy, selection Apache Tomcat will get the correct bundle, but many others, e.g. nginx, result in a wrong bundle that will not work for some clients). For each PEM file, HAProxy checks for the presence of file at the same path suffixed by ".ocsp". If such file is found, support for the TLS Certificate Status Request extension (also known as "OCSP stapling") is automatically enabled. The content of this file is optional. If not empty, it must contain a valid OCSP Response in DER format. In order to be valid an OCSP Response must comply with the following rules: it has to indicate a good status, it has to be a single response for the certificate of the PEM file, and it has to be valid at the moment of addition. If these rules are not respected the OCSP Response is ignored and a warning is emitted. In order to identify which certificate an OCSP Response applies to, the issuer's certificate is necessary. If the issuer's certificate is not found in the PEM file, it will be loaded from a file at the same path as the PEM file suffixed by ".issuer" if it exists otherwise it will fail with an error. 
For each PEM file, HAProxy also checks for the presence of file at the same path suffixed by ".sctl". If such file is found, support for Certificate Transparency (RFC6962) TLS extension is enabled. The file must contain a valid Signed Certificate Timestamp List, as described in RFC. File is parsed to check basic syntax, but no signatures are verified. There are cases where it is desirable to support multiple key types, e.g. RSA and ECDSA in the cipher suites offered to the clients. This allows clients that support EC certificates to be able to use EC ciphers, while simultaneously supporting older, RSA only clients. To achieve this, OpenSSL 1.1.1 is required, you can configure this behavior by providing one crt entry per certificate type, or by configuring a "cert bundle" like it was required before HAProxy 1.8. See "[ssl-load-extra-files](#ssl-load-extra-files)". ``` **crt-ignore-err** <errors> ``` This setting is only available when support for OpenSSL was built in. Sets a comma separated list of errorIDs to ignore during verify at depth == 0. It could be a numerical ID, or the constant name (X509_V_ERR) which is available in the OpenSSL documentation: https://www.openssl.org/docs/manmaster/man3/X509_STORE_CTX_get_error.html#ERROR-CODES It is recommended to use the constant name as the numerical value can change in new version of OpenSSL. If set to 'all', all errors are ignored. SSL handshake is not aborted if an error is ignored. ``` **crt-list** <file> ``` This setting is only available when support for OpenSSL was built in. It designates a list of PEM file with an optional ssl configuration and a SNI filter per certificate, with the following format for each line : <crtfile> [\[<sslbindconf> ...\]] [[!]<snifilter> ...] sslbindconf supports "allow-0rtt", "alpn", "ca-file", "[ca-verify-file](#ca-verify-file)", "ciphers", "ciphersuites", "crl-file", "[curves](#curves)", "[ecdhe](#ecdhe)", "[no-ca-names](#no-ca-names)", "npn", "verify" configuration. With BoringSSL and Openssl >= 1.1.1 "ssl-min-ver" and "ssl-max-ver" are also supported. It overrides the configuration set in bind line for the certificate. Wildcards are supported in the SNI filter. Negative filter are also supported, useful in combination with a wildcard filter to exclude a particular SNI, or after the first certificate to exclude a pattern from its CN or Subject Alt Name (SAN). The certificates will be presented to clients who provide a valid TLS Server Name Indication field matching one of the SNI filters. If no SNI filter is specified, the CN and SAN are used. This directive may be specified multiple times. See the "crt" option for more information. The default certificate is still needed to meet OpenSSL expectations. If it is not used, the 'strict-sni' option may be used. Multi-cert bundling (see "[ssl-load-extra-files](#ssl-load-extra-files)") is supported with crt-list, as long as only the base name is given in the crt-list. SNI filter will do the same work on all bundled certificates. Empty lines as well as lines beginning with a hash ('#') will be ignored. The first declared certificate of a bind line is used as the default certificate, either from crt or crt-list option, which HAProxy should use in the TLS handshake if no other certificate matches. This certificate will also be used if the provided SNI matches its CN or SAN, even if a matching SNI filter is found on any crt-list. 
The SNI filter !* can be used after the first declared certificate to not include its CN and SAN in the SNI tree, so it will never match except if no other certificate matches. This way the first declared certificate act as a fallback. crt-list file example: cert1.pem !* # comment cert2.pem [alpn h2,http/1.1] certW.pem *.domain.tld !secure.domain.tld certS.pem [curves X25519:P-256 ciphers ECDHE-ECDSA-AES256-GCM-SHA384] secure.domain.tld ``` **defer-accept** ``` Is an optional keyword which is supported only on certain Linux kernels. It states that a connection will only be accepted once some data arrive on it, or at worst after the first retransmit. This should be used only on protocols for which the client talks first (e.g. HTTP). It can slightly improve performance by ensuring that most of the request is already available when the connection is accepted. On the other hand, it will not be able to detect connections which don't talk. It is important to note that this option is broken in all kernels up to 2.6.31, as the connection is never accepted until the client talks. This can cause issues with front firewalls which would see an established connection while the proxy will only see it in SYN_RECV. This option is only supported on TCPv4/TCPv6 sockets and ignored by other ones. ``` **expose-fd listeners** ``` This option is only usable with the stats socket. It gives your stats socket the capability to pass listeners FD to another HAProxy process. In master-worker mode, this is not required anymore, the listeners will be passed using the internal socketpairs between the master and the workers. See also "-x" in the management guide. ``` **force-sslv3** ``` This option enforces use of SSLv3 only on SSL connections instantiated from this listener. SSLv3 is generally less expensive than the TLS counterparts for high connection rates. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **force-tlsv10** ``` This option enforces use of TLSv1.0 only on SSL connections instantiated from this listener. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **force-tlsv11** ``` This option enforces use of TLSv1.1 only on SSL connections instantiated from this listener. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **force-tlsv12** ``` This option enforces use of TLSv1.2 only on SSL connections instantiated from this listener. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **force-tlsv13** ``` This option enforces use of TLSv1.3 only on SSL connections instantiated from this listener. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **generate-certificates** ``` This setting is only available when support for OpenSSL was built in. It enables the dynamic SSL certificates generation. A CA certificate and its private key are necessary (see 'ca-sign-file'). When HAProxy is configured as a transparent forward proxy, SSL requests generate errors because of a common name mismatch on the certificate presented to the client. 
With this option enabled, HAProxy will try to forge a certificate using the SNI hostname indicated by the client. This is done only if no certificate matches the SNI hostname (see 'crt-list'). If an error occurs, the default certificate is used, else the 'strict-sni' option is set. It can also be used when HAProxy is configured as a reverse proxy to ease the deployment of an architecture with many backends. Creating a SSL certificate is an expensive operation, so a LRU cache is used to store forged certificates (see 'tune.ssl.ssl-ctx-cache-size'). It increases the HAProxy's memory footprint to reduce latency when the same certificate is used many times. ``` **gid** <gid> ``` Sets the group of the UNIX sockets to the designated system gid. It can also be set by default in the global section's "[unix-bind](#unix-bind)" statement. Note that some platforms simply ignore this. This setting is equivalent to the "group" setting except that the group ID is used instead of its name. This setting is ignored by non UNIX sockets. ``` **group** <group> ``` Sets the group of the UNIX sockets to the designated system group. It can also be set by default in the global section's "[unix-bind](#unix-bind)" statement. Note that some platforms simply ignore this. This setting is equivalent to the "gid" setting except that the group name is used instead of its gid. This setting is ignored by non UNIX sockets. ``` **id** <id> ``` Fixes the socket ID. By default, socket IDs are automatically assigned, but sometimes it is more convenient to fix them to ease monitoring. This value must be strictly positive and unique within the listener/frontend. This option can only be used when defining only a single socket. ``` **interface** <interface> ``` Restricts the socket to a specific interface. When specified, only packets received from that particular interface are processed by the socket. This is currently only supported on Linux. The interface must be a primary system interface, not an aliased interface. It is also possible to bind multiple frontends to the same address if they are bound to different interfaces. Note that binding to a network interface requires root privileges. This parameter is only compatible with TCPv4/TCPv6 sockets. When specified, return traffic uses the same interface as inbound traffic, and its associated routing table, even if there are explicit routes through different interfaces configured. This can prove useful to address asymmetric routing issues when the same client IP addresses need to be able to reach frontends hosted on different interfaces. ``` **level** <level> ``` This setting is used with the stats sockets only to restrict the nature of the commands that can be issued on the socket. It is ignored by other sockets. <level> can be one of : - "user" is the least privileged level; only non-sensitive stats can be read, and no change is allowed. It would make sense on systems where it is not easy to restrict access to the socket. - "operator" is the default level and fits most common uses. All data can be read, and only non-sensitive changes are permitted (e.g. clear max counters). - "admin" should be used with care, as everything is permitted (e.g. clear all counters). ``` **severity-output** <format> ``` This setting is used with the stats sockets only to configure severity level output prepended to informational feedback messages. Severity level of messages can range between 0 and 7, conforming to syslog rfc5424. Valid and successful socket commands requesting data (i.e. 
"show map", "get acl foo" etc.) will never have a severity level prepended. It is ignored by other sockets. <format> can be one of : - "none" (default) no severity level is prepended to feedback messages. - "number" severity level is prepended as a number. - "string" severity level is prepended as a string following the rfc5424 convention. ``` **maxconn** <maxconn> ``` Limits the sockets to this number of concurrent connections. Extraneous connections will remain in the system's backlog until a connection is released. If unspecified, the limit will be the same as the frontend's maxconn. Note that in case of port ranges or multiple addresses, the same value will be applied to each socket. This setting enables different limitations on expensive sockets, for instance SSL entries which may easily eat all memory. ``` **mode** <mode> ``` Sets the octal mode used to define access permissions on the UNIX socket. It can also be set by default in the global section's "[unix-bind](#unix-bind)" statement. Note that some platforms simply ignore this. This setting is ignored by non UNIX sockets. ``` **mss** <maxseg> ``` Sets the TCP Maximum Segment Size (MSS) value to be advertised on incoming connections. This can be used to force a lower MSS for certain specific ports, for instance for connections passing through a VPN. Note that this relies on a kernel feature which is theoretically supported under Linux but was buggy in all versions prior to 2.6.28. It may or may not work on other operating systems. It may also not change the advertised value but change the effective size of outgoing segments. The commonly advertised value for TCPv4 over Ethernet networks is 1460 = 1500(MTU) - 40(IP+TCP). If this value is positive, it will be used as the advertised MSS. If it is negative, it will indicate by how much to reduce the incoming connection's advertised MSS for outgoing segments. This parameter is only compatible with TCP v4/v6 sockets. ``` **name** <name> ``` Sets an optional name for these sockets, which will be reported on the stats page. ``` **namespace** <name> ``` On Linux, it is possible to specify which network namespace a socket will belong to. This directive makes it possible to explicitly bind a listener to a namespace different from the default one. Please refer to your operating system's documentation to find more details about network namespaces. ``` **nice** <nice> ``` Sets the 'niceness' of connections initiated from the socket. Value must be in the range -1024..1024 inclusive, and defaults to zero. Positive values means that such connections are more friendly to others and easily offer their place in the scheduler. On the opposite, negative values mean that connections want to run with a higher priority than others. The difference only happens under high loads when the system is close to saturation. Negative values are appropriate for low-latency or administration services, and high values are generally recommended for CPU intensive tasks such as SSL processing or bulk transfers which are less sensible to latency. For example, it may make sense to use a positive value for an SMTP socket and a negative one for an RDP socket. ``` **no-ca-names** ``` This setting is only available when support for OpenSSL was built in. It prevents from send CA names in server hello message when ca-file is used. Use "[ca-verify-file](#ca-verify-file)" instead of "ca-file" with "[no-ca-names](#no-ca-names)". ``` **no-sslv3** ``` This setting is only available when support for OpenSSL was built in. 
It disables support for SSLv3 on any sockets instantiated from the listener when SSL is supported. Note that SSLv2 is forced disabled in the code and cannot be enabled using any configuration option. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. ``` **no-tls-tickets** ``` This setting is only available when support for OpenSSL was built in. It disables the stateless session resumption (RFC 5077 TLS Ticket extension) and force to use stateful session resumption. Stateless session resumption is more expensive in CPU usage. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". The TLS ticket mechanism is only used up to TLS 1.2. Forward Secrecy is compromised with TLS tickets, unless ticket keys are periodically rotated (via reload or by using "[tls-ticket-keys](#tls-ticket-keys)"). ``` **no-tlsv10** ``` This setting is only available when support for OpenSSL was built in. It disables support for TLSv1.0 on any sockets instantiated from the listener when SSL is supported. Note that SSLv2 is forced disabled in the code and cannot be enabled using any configuration option. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. ``` **no-tlsv11** ``` This setting is only available when support for OpenSSL was built in. It disables support for TLSv1.1 on any sockets instantiated from the listener when SSL is supported. Note that SSLv2 is forced disabled in the code and cannot be enabled using any configuration option. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. ``` **no-tlsv12** ``` This setting is only available when support for OpenSSL was built in. It disables support for TLSv1.2 on any sockets instantiated from the listener when SSL is supported. Note that SSLv2 is forced disabled in the code and cannot be enabled using any configuration option. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. ``` **no-tlsv13** ``` This setting is only available when support for OpenSSL was built in. It disables support for TLSv1.3 on any sockets instantiated from the listener when SSL is supported. Note that SSLv2 is forced disabled in the code and cannot be enabled using any configuration option. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. ``` **npn** <protocols> ``` This enables the NPN TLS extension and advertises the specified protocol list as supported on top of NPN. The protocol list consists in a comma-delimited list of protocol names, for instance: "http/1.1,http/1.0" (without quotes). This requires that the SSL library is built with support for TLS extensions enabled (check with haproxy -vv). Note that the NPN extension has been replaced with the ALPN extension (see the "alpn" keyword), though this one is only available starting with OpenSSL 1.0.2. If HTTP/2 is desired on an older version of OpenSSL, NPN might still be used as most clients still support it at the time of writing this. It is possible to enable both NPN and ALPN though it probably doesn't make any sense out of testing. 
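For illustration only (the certificate path is a placeholder), a bind line advertising the same protocol list over both NPN and ALPN could look like :
bind :443 ssl crt /etc/haproxy/certs/site.pem npn http/1.1,http/1.0 alpn http/1.1,http/1.0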
``` **prefer-client-ciphers** ``` Use the client's preference when selecting the cipher suite; by default the server's preference is enforced. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". Note that with OpenSSL >= 1.1.1 ChaCha20-Poly1305 is reprioritized anyway (without setting this option), if a ChaCha20-Poly1305 cipher is at the top of the client cipher list. ``` **proto** <name> ``` Forces the multiplexer's protocol to use for the incoming connections. It must be compatible with the mode of the frontend (TCP or HTTP). It must also be usable on the frontend side. The list of available protocols is reported in haproxy -vv. The protocols properties are reported : the mode (TCP/HTTP), the side (FE/BE), the mux name and its flags. Some protocols are subject to the head-of-line blocking on server side (flag=HOL_RISK). Finally some protocols don't support upgrades (flag=NO_UPG). The HTX compatibility is also reported (flag=HTX). Here are the protocols that may be used as argument to a "proto" directive on a bind line : h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|HOL_RISK|NO_UPG h1 : mode=HTTP side=FE|BE mux=H1 flags=HTX|NO_UPG none : mode=TCP side=FE|BE mux=PASS flags=NO_UPG The idea behind this option is to bypass the selection of the best multiplexer's protocol for all connections instantiated from this listening socket. For instance, it is possible to force HTTP/2 on clear TCP by specifying "proto h2" on the bind line. ``` **quic-cc-algo** [ cubic | newreno ] ``` Warning: QUIC support in HAProxy is currently experimental. Configuration may change without deprecation in the future. This is a QUIC specific setting to select the congestion control algorithm for any connection attempts to the configured QUIC listeners. They are similar to those used by TCP. Default value: cubic ``` **quic-force-retry** ``` Warning: QUIC support in HAProxy is currently experimental. Configuration may change without deprecation in the future. This is a QUIC specific setting which forces the use of the QUIC Retry feature for all the connection attempts to the configured QUIC listeners. It consists in verifying the peers are able to receive packets at the transport address they used to initiate a new connection, sending them a Retry packet which contains a token. This token must be sent back to the Retry packet sender, this latter being the only one to be able to validate the token. Note that QUIC Retry will always be used even if a Retry threshold was set (see "[tune.quic.retry-threshold](#tune.quic.retry-threshold)" setting). This setting requires the cluster secret to be set or else an error will be reported on startup (see "[cluster-secret](#cluster-secret)"). See https://www.rfc-editor.org/rfc/rfc9000.html#section-8.1.2 for more information about QUIC retry. ``` **shards** <number> | by-thread ``` In multi-threaded mode, on operating systems supporting multiple listeners on the same IP:port, this will automatically create this number of multiple identical listeners for the same line, all bound to a fair share of the number of the threads attached to this listener. This can sometimes be useful when using very large thread counts where the in-kernel locking on a single socket starts to cause a significant overhead. In this case the incoming traffic is distributed over multiple sockets and the contention is reduced. Note that doing this can easily increase the CPU usage by making more threads work a little bit.
If the number of shards is higher than the number of available threads, it will automatically be trimmed to the number of threads (i.e. one shard per thread). The special "by-thread" value also creates as many shards as there are threads on the "bind" line. Since the system will evenly distribute the incoming traffic between all these shards, it is important that this number is an integral divisor of the number of threads. ``` **ssl** ``` This setting is only available when support for OpenSSL was built in. It enables SSL deciphering on connections instantiated from this listener. A certificate is necessary (see "crt" above). All contents in the buffers will appear in clear text, so that ACLs and HTTP processing will only have access to deciphered contents. SSLv3 is disabled by default; use "ssl-min-ver SSLv3" to enable it. ``` **ssl-max-ver** [ SSLv3 | TLSv1.0 | TLSv1.1 | TLSv1.2 | TLSv1.3 ] ``` This option enforces use of <version> or lower on SSL connections instantiated from this listener. Using this setting without "ssl-min-ver" can be ambiguous because the default ssl-min-ver value could change in future HAProxy versions. This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". See also "ssl-min-ver". ``` **ssl-min-ver** [ SSLv3 | TLSv1.0 | TLSv1.1 | TLSv1.2 | TLSv1.3 ] ``` This option enforces use of <version> or higher on SSL connections instantiated from this listener. The default value is "TLSv1.2". This option is also available on global statement "[ssl-default-bind-options](#ssl-default-bind-options)". See also "ssl-max-ver". ``` **strict-sni** ``` This setting is only available when support for OpenSSL was built in. The SSL/TLS negotiation is allowed only if the client provided an SNI which matches a certificate. The default certificate is not used. See the "crt" option for more information. ``` **tcp-ut** <delay> ``` Sets the TCP User Timeout for all incoming connections instantiated from this listening socket. This option is available on Linux since version 2.6.37. It allows HAProxy to configure a timeout for sockets which contain data not receiving an acknowledgment for the configured delay. This is especially useful on long-lived connections experiencing long idle periods such as remote terminals or database connection pools, where the client and server timeouts must remain high to allow a long period of idle, but where it is important to detect that the client has disappeared in order to release all resources associated with its connection (and the server's session). The argument is a delay expressed in milliseconds by default. This only works for regular TCP connections, and is ignored for other protocols. ``` **tfo** ``` Is an optional keyword which is supported only on Linux kernels >= 3.7. It enables TCP Fast Open on the listening socket, which means that clients which support this feature will be able to send a request and receive a response during the 3-way handshake starting from the second connection, thus saving one round-trip after the first connection. This only makes sense with protocols that use high connection rates and where each round trip matters. This can possibly cause issues with many firewalls which do not accept data on SYN packets, so this option should only be enabled once well tested. This option is only supported on TCPv4/TCPv6 sockets and ignored by other ones. You may need to build HAProxy with USE_TFO=1 if your libc doesn't define TCP_FASTOPEN.
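For illustration only (the address and certificate path are placeholders), enabling it is simply a matter of adding the keyword to the bind line; note that on Linux the server side of TCP Fast Open typically also has to be enabled via the net.ipv4.tcp_fastopen sysctl :
bind :443 tfo ssl crt /etc/haproxy/certs/site.pem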
``` **thread** [<thread-group>/]<thread-set> ``` This restricts the list of threads on which this listener is allowed to run. It does not enforce any of them but eliminates those which do not match. It limits the threads allowed to process incoming connections for this listener. There are two numbering schemes. By default, thread numbers are absolute in the process, comprised between 1 and the value specified in global.nbthread. When thread groups are enabled, the number of a single desired thread group (starting at 1) may be specified before a slash ('/') before the thread range. In this case, the thread numbers in the range are relative to the thread group instead, and start at 1 for each thread group. Absolute and relative thread numbers may be used interchangeably but they must not be mixed on a single "bind" line, as those not set will be resolved at the end of the parsing. For the unlikely case where several ranges are needed, this directive may be repeated. It is not permitted to use different thread groups even when using multiple directives. The <thread-set> specification must use the format: all | odd | even | number[-[number]] Ranges can be partially defined. The higher bound can be omitted. In such a case, it is replaced by the corresponding maximum value. The main purpose is to have multiple bind lines sharing the same IP:port but not the same thread in a listener, so that the system can distribute the incoming connections into multiple queues, bypassing haproxy's internal queue load balancing. Currently Linux 3.9 and above is known for supporting this. ``` **tls-ticket-keys** <keyfile> ``` Sets the TLS ticket keys file to load the keys from. The keys need to be 48 or 80 bytes long, depending if aes128 or aes256 is used, encoded with base64 with one line per key (ex. openssl rand 80 | openssl base64 -A | xargs echo). The first key determines the key length used for next keys: you can't mix aes128 and aes256 keys. Number of keys is specified by the TLS_TICKETS_NO build option (default 3) and at least as many keys need to be present in the file. Last TLS_TICKETS_NO keys will be used for decryption and the penultimate one for encryption. This enables easy key rotation by just appending new key to the file and reloading the process. Keys must be periodically rotated (ex. every 12h) or Perfect Forward Secrecy is compromised. It is also a good idea to keep the keys off any permanent storage such as hard drives (hint: use tmpfs and don't swap those files). Lifetime hint can be changed using tune.ssl.timeout. ``` **transparent** ``` Is an optional keyword which is supported only on certain Linux kernels. It indicates that the addresses will be bound even if they do not belong to the local machine, and that packets targeting any of these addresses will be intercepted just as if the addresses were locally configured. This normally requires that IP forwarding is enabled. Caution! do not use this with the default address '*', as it would redirect any traffic for the specified port. This keyword is available only when HAProxy is built with USE_LINUX_TPROXY=1. This parameter is only compatible with TCPv4 and TCPv6 sockets, depending on kernel version. Some distribution kernels include backports of the feature, so check for support with your vendor. ``` **v4v6** ``` Is an optional keyword which is supported only on most recent systems including Linux kernels >= 2.4.21. It is used to bind a socket to both IPv4 and IPv6 when it uses the default address. 
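A minimal sketch (the port is arbitrary) binding a single socket to both IPv4 and IPv6 through the default address:

    bind :::80 v4v6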
Doing so is sometimes necessary on systems which bind to IPv6 only by default. It has no effect on non-IPv6 sockets, and is overridden by the "[v6only](#v6only)" option. ``` **v6only** ``` Is an optional keyword which is supported only on most recent systems including Linux kernels >= 2.4.21. It is used to bind a socket to IPv6 only when it uses the default address. Doing so is sometimes preferred to doing it system-wide as it is per-listener. It has no effect on non-IPv6 sockets and has precedence over the "[v4v6](#v4v6)" option. ``` **uid** <uid> ``` Sets the owner of the UNIX sockets to the designated system uid. It can also be set by default in the global section's "[unix-bind](#unix-bind)" statement. Note that some platforms simply ignore this. This setting is equivalent to the "user" setting except that the user numeric ID is used instead of its name. This setting is ignored by non UNIX sockets. ``` **user** <user> ``` Sets the owner of the UNIX sockets to the designated system user. It can also be set by default in the global section's "[unix-bind](#unix-bind)" statement. Note that some platforms simply ignore this. This setting is equivalent to the "uid" setting except that the user name is used instead of its uid. This setting is ignored by non UNIX sockets. ``` **verify** [none|optional|required] ``` This setting is only available when support for OpenSSL was built in. If set to 'none', client certificate is not requested. This is the default. In other cases, a client certificate is requested. If the client does not provide a certificate after the request and if 'verify' is set to 'required', then the handshake is aborted, while it would have succeeded if set to 'optional'. The certificate provided by the client is always verified using CAs from 'ca-file' and optional CRLs from 'crl-file'. On verify failure the handshake is aborted, regardless of the 'verify' option, unless the error code exactly matches one of those listed with 'ca-ignore-err' or 'crt-ignore-err'. ``` ### 5.2. Server and default-server options ``` The "server" and "default-server" keywords support a certain number of settings which are all passed as arguments on the server line. The order in which those arguments appear does not count, and they are all optional. Some of those settings are single words (booleans) while others expect one or several values after them. In this case, the values must immediately follow the setting name. Except default-server, all those settings must be specified after the server's address if they are used: server <name> <address>[:port] [settings ...] default-server [settings ...] Note that all these settings are supported both by "server" and "default-server" keywords, except "id" which is only supported by "server". The currently supported settings are the following ones. ``` **addr** <ipv4|ipv6> ``` Using the "[addr](#addr)" parameter, it becomes possible to use a different IP address to send health-checks or to probe the agent-check. On some servers, it may be desirable to dedicate an IP address to specific component able to perform complex tests which are more suitable to health-checks than the application. This parameter is ignored if the "[check](#check)" parameter is not set. See also the "[port](#port)" parameter. ``` **agent-check** ``` Enable an auxiliary agent check which is run independently of a regular health check. 
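For illustration only (backend name, address, port and interval are placeholders), an agent check is typically combined with a regular health check like this:

    backend be_app
        server app1 10.0.0.21:80 check agent-check agent-port 9999 agent-inter 5s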
An agent health check is performed by making a TCP connection to the port set by the "[agent-port](#agent-port)" parameter and reading an ASCII string terminated by the first '\r' or '\n' met. The string is made of a series of words delimited by spaces, tabs or commas in any order, each consisting of : - An ASCII representation of a positive integer percentage, e.g. "75%". Values in this format will set the weight proportional to the initial weight of a server as configured when HAProxy starts. Note that a zero weight is reported on the stats page as "DRAIN" since it has the same effect on the server (it's removed from the LB farm). - The string "maxconn:" followed by an integer (no space between). Values in this format will set the maxconn of a server. The maximum number of connections advertised needs to be multiplied by the number of load balancers and different backends that use this health check to get the total number of connections the server might receive. Example: maxconn:30 - The word "ready". This will turn the server's administrative state to the READY mode, thus canceling any DRAIN or MAINT state - The word "drain". This will turn the server's administrative state to the DRAIN mode, thus it will not accept any new connections other than those that are accepted via persistence. - The word "maint". This will turn the server's administrative state to the MAINT mode, thus it will not accept any new connections at all, and health checks will be stopped. - The words "down", "fail", or "stopped", optionally followed by a description string after a sharp ('#'). All of these mark the server's operating state as DOWN, but since the word itself is reported on the stats page, the difference allows an administrator to know if the situation was expected or not : the service may intentionally be stopped, may appear up but fail some validity tests, or may be seen as down (e.g. missing process, or port not responding). - The word "up" sets back the server's operating state as UP if health checks also report that the service is accessible. Parameters which are not advertised by the agent are not changed. For example, an agent might be designed to monitor CPU usage and only report a relative weight and never interact with the operating status. Similarly, an agent could be designed as an end-user interface with 3 radio buttons allowing an administrator to change only the administrative state. However, it is important to consider that only the agent may revert its own actions, so if a server is set to DRAIN mode or to DOWN state using the agent, the agent must implement the other equivalent actions to bring the service into operations again. Failure to connect to the agent is not considered an error as connectivity is tested by the regular health check which is enabled by the "[check](#check)" parameter. Warning though, it is not a good idea to stop an agent after it reports "down", since only an agent reporting "up" will be able to turn the server up again. Note that the CLI on the Unix stats socket is also able to force an agent's result in order to work around a bogus agent if needed. Requires the "[agent-port](#agent-port)" parameter to be set. See also the "[agent-inter](#agent-inter)" and "[no-agent-check](#no-agent-check)" parameters. ``` **agent-send** <string> ``` If this option is specified, HAProxy will send the given string (verbatim) to the agent server upon connection. 
You could, for example, encode the backend name into this string, which would enable your agent to send different responses based on the backend. Make sure to include a '\n' if you want to terminate your request with a newline. ``` **agent-inter** <delay> ``` The "[agent-inter](#agent-inter)" parameter sets the interval between two agent checks to <delay> milliseconds. If left unspecified, the delay defaults to 2000 ms. Just as with every other time-based parameter, it may be entered in any other explicit unit among { us, ms, s, m, h, d }. The "[agent-inter](#agent-inter)" parameter also serves as a timeout for agent checks if "[timeout check](#timeout%20check)" is not set. In order to reduce "resonance" effects when multiple servers are hosted on the same hardware, the agent and health checks of all servers are started with a small time offset between them. It is also possible to add some random noise in the agent and health checks interval using the global "[spread-checks](#spread-checks)" keyword. This makes sense for instance when a lot of backends use the same servers. See also the "[agent-check](#agent-check)" and "[agent-port](#agent-port)" parameters. ``` **agent-addr** <addr> ``` The "[agent-addr](#agent-addr)" parameter sets the address used for agent checks. Agent checks can be offloaded to another target, providing a single place to manage the status and weights of the servers defined in HAProxy when the services themselves cannot be made self-aware and self-managing. Either an IP address or a hostname may be specified; a hostname will be resolved. ``` **agent-port** <port> ``` The "[agent-port](#agent-port)" parameter sets the TCP port used for agent checks. See also the "[agent-check](#agent-check)" and "[agent-inter](#agent-inter)" parameters. ``` **allow-0rtt** ``` Allow sending early data to the server when using TLS 1.3. Note that early data will be sent only if the client used early data, or if the backend uses "[retry-on](#retry-on)" with the "0rtt-rejected" keyword. ``` **alpn** <protocols> ``` This enables the TLS ALPN extension and advertises the specified protocol list as supported on top of ALPN. The protocol list consists in a comma-delimited list of protocol names, for instance: "http/1.1,http/1.0" (without quotes). This requires that the SSL library is built with support for TLS extensions enabled (check with haproxy -vv). The ALPN extension replaces the initial NPN extension. ALPN is required to connect to HTTP/2 servers. Versions of OpenSSL prior to 1.0.2 didn't support ALPN and only supported the now obsolete NPN extension. If both HTTP/2 and HTTP/1.1 are expected to be supported, both versions can be advertised, in order of preference, like below : server 127.0.0.1:443 ssl crt pub.pem alpn h2,http/1.1 See also "[ws](#ws)" to use an alternative ALPN for websocket streams. ``` **backup** ``` When "[backup](#backup)" is present on a server line, the server is only used in load balancing when all other non-backup servers are unavailable. Requests coming with a persistence cookie referencing the server will always be served though. By default, only the first operational backup server is used, unless the "[allbackups](#option%20allbackups)" option is set in the backend. See also the "[no-backup](#no-backup)" and "[allbackups](#option%20allbackups)" options. ``` **ca-file** <cafile> ``` This setting is only available when support for OpenSSL was built in. It designates a PEM file from which to load CA certificates used to verify the server's certificate. 
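For illustration only (the address and file path are placeholders), it is typically combined with certificate verification on the server line:

    server app1 10.0.0.10:443 ssl verify required ca-file /etc/haproxy/ca.pem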
It is possible to load a directory containing multiple CAs, in this case HAProxy will try to load every ".pem", ".crt", ".cer", and ".crl" available in the directory, files starting with a dot are ignored. In order to use the trusted CAs of your system, the "@system-ca" parameter could be used in place of the cafile. The location of this directory could be overwritten by setting the SSL_CERT_DIR environment variable. ``` **check** ``` This option enables health checks on a server:
- when not set, no health checking is performed, and the server is always considered available.
- when set and no other check method is configured, the server is considered available when a connection can be established at the highest configured transport layer. This means TCP by default, or SSL/TLS when "ssl" or "[check-ssl](#check-ssl)" are set, both possibly combined with connection prefixes such as a PROXY protocol header when "[send-proxy](#send-proxy)" or "[check-send-proxy](#check-send-proxy)" are set. This behavior is slightly different for dynamic servers, read the following paragraphs for more details.
- when set and an application-level health check is defined, the application-level exchanges are performed on top of the configured transport layer and the server is considered available if all of the exchanges succeed.
By default, health checks are performed on the same address and port as configured on the server, using the same encapsulation parameters (SSL/TLS, proxy-protocol header, etc... ). It is possible to change the destination address using "[addr](#addr)" and the port using "[port](#port)". When done, it is assumed the server isn't checked on the service port, and configured encapsulation parameters are not reused. One must explicitly set "[check-send-proxy](#check-send-proxy)" to send connection headers, "[check-ssl](#check-ssl)" to use SSL/TLS. Note that the implicit configuration of ssl and PROXY protocol is not performed for dynamic servers. In this case, it is required to explicitly use "[check-ssl](#check-ssl)" and "[check-send-proxy](#check-send-proxy)" when wanted, even if the check port is not overridden. When "[sni](#sni)" or "alpn" are set on the server line, their value is not used for health checks and one must use "[check-sni](#check-sni)" or "[check-alpn](#check-alpn)". The default source address for health check traffic is the same as the one defined in the backend. It can be changed with the "source" keyword. The interval between checks can be set using the "[inter](#inter)" keyword, and the "[rise](#rise)" and "[fall](#fall)" keywords can be used to define how many successful or failed health checks are required to flag a server available or not available. Optional application-level health checks can be configured with "option httpchk", "[option mysql-check](#option%20mysql-check)", "[option smtpchk](#option%20smtpchk)", "[option pgsql-check](#option%20pgsql-check)", "[option ldap-check](#option%20ldap-check)", or "[option redis-check](#option%20redis-check)". ``` Example: ```
# simple tcp check
backend foo
    server s1 192.168.0.1:80 check

# this does a tcp connect + tls handshake
backend foo
    server s1 192.168.0.1:443 ssl check

# simple tcp check is enough for check success
backend foo
    option tcp-check
    tcp-check connect
    server s1 192.168.0.1:443 ssl check
``` **check-send-proxy** ``` This option forces emission of a PROXY protocol line with outgoing health checks, regardless of whether the server uses send-proxy or not for the normal traffic. 
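For illustration only (addresses and ports are placeholders), a server checked on a dedicated port while still sending the PROXY protocol header on checks could look like:

    server app1 10.0.0.30:80 send-proxy check port 8080 check-send-proxy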
By default, the PROXY protocol is enabled for health checks if it is already enabled for normal traffic and if no "[port](#port)" or "[addr](#addr)" directive is present. However, if such a directive is present, the "[check-send-proxy](#check-send-proxy)" option needs to be used to force the use of the protocol. See also the "[send-proxy](#send-proxy)" option for more information. ``` **check-alpn** <protocols> ``` Defines which protocols to advertise with ALPN. The protocol list consists in a comma-delimited list of protocol names, for instance: "http/1.1,http/1.0" (without quotes). If it is not set, the server ALPN is used. ``` **check-proto** <name> ``` Forces the multiplexer's protocol to use for the server's health-check connections. It must be compatible with the health-check type (TCP or HTTP). It must also be usable on the backend side. The list of available protocols is reported in haproxy -vv. The protocol properties reported are: the mode (TCP/HTTP), the side (FE/BE), the mux name and its flags. Some protocols are subject to head-of-line blocking on the server side (flag=HOL_RISK). Finally some protocols don't support upgrades (flag=NO_UPG). The HTX compatibility is also reported (flag=HTX). Here are the protocols that may be used as argument to a "[check-proto](#check-proto)" directive on a server line: h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|HOL_RISK|NO_UPG fcgi : mode=HTTP side=BE mux=FCGI flags=HTX|HOL_RISK|NO_UPG h1 : mode=HTTP side=FE|BE mux=H1 flags=HTX|NO_UPG none : mode=TCP side=FE|BE mux=PASS flags=NO_UPG The idea behind this option is to bypass the selection of the best multiplexer's protocol for health-check connections established to this server. If not defined, the protocol configured on the server (if any) will be used. ``` **check-sni** <sni> ``` This option allows you to specify the SNI to be used when doing health checks over SSL. It is only possible to use a string to set <sni>. If you want to set a SNI for proxied traffic, see "[sni](#sni)". ``` **check-ssl** ``` This option forces encryption of all health checks over SSL, regardless of whether the server uses SSL or not for the normal traffic. This is generally used when an explicit "[port](#port)" or "[addr](#addr)" directive is specified and SSL health checks are not inherited. It is important to understand that this option inserts an SSL transport layer below the checks, so that a simple TCP connect check becomes an SSL connect, which replaces the old ssl-hello-chk. The most common use is to send HTTPS checks by combining "[httpchk](#option%20httpchk)" with SSL checks. All SSL settings are common to health checks and traffic (e.g. ciphers). See the "ssl" option for more information and "[no-check-ssl](#no-check-ssl)" to disable this option. ``` **check-via-socks4** ``` This option enables outgoing health checks through an upstream SOCKS4 proxy. By default, health checks will not go through the SOCKS4 tunnel even if it was enabled for normal traffic. ``` **ciphers** <ciphers> ``` This setting is only available when support for OpenSSL was built in. This option sets the string describing the list of cipher algorithms that is negotiated during the SSL/TLS handshake with the server. The format of the string is defined in "man 1 ciphers" from OpenSSL man pages. For background information and recommendations see e.g. (https://wiki.mozilla.org/Security/Server_Side_TLS) and (https://mozilla.github.io/server-side-tls/ssl-config-generator/). For TLSv1.3 cipher configuration, please check the "ciphersuites" keyword. 
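As a purely illustrative sketch (the address, CA path and cipher string are placeholders, not a recommendation), a restricted TLSv1.2 cipher list may be set on a server line like this:

    server app1 10.0.0.10:443 ssl verify required ca-file /etc/haproxy/ca.pem ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256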
``` **ciphersuites** <ciphersuites> ``` This setting is only available when support for OpenSSL was built in and OpenSSL 1.1.1 or later was used to build HAProxy. This option sets the string describing the list of cipher algorithms that is negotiated during the TLS 1.3 handshake with the server. The format of the string is defined in "man 1 ciphers" from OpenSSL man pages under the "ciphersuites" section. For cipher configuration for TLSv1.2 and earlier, please check the "ciphers" keyword. ``` **cookie** <value> ``` The "cookie" parameter sets the cookie value assigned to the server to <value>. This value will be checked in incoming requests, and the first operational server possessing the same value will be selected. In return, in cookie insertion or rewrite modes, this value will be assigned to the cookie sent to the client. There is nothing wrong in having several servers sharing the same cookie value, and it is in fact somewhat common between normal and backup servers. See also the "cookie" keyword in backend section. ``` **crl-file** <crlfile> ``` This setting is only available when support for OpenSSL was built in. It designates a PEM file from which to load the certificate revocation list used to verify the server's certificate. ``` **crt** <cert> ``` This setting is only available when support for OpenSSL was built in. It designates a PEM file from which to load both a certificate and the associated private key. This file can be built by concatenating both PEM files into one. This certificate will be sent if the server sends a client certificate request. If the file does not contain a private key, HAProxy will try to load the key at the same path suffixed by a ".key" (provided the "[ssl-load-extra-files](#ssl-load-extra-files)" option is set accordingly). ``` **disabled** ``` The "disabled" keyword starts the server in the "disabled" state. That means that it is marked down in maintenance mode, and no connection other than the ones allowed by persist mode will reach it. It is very well suited to set up new servers, because normal traffic will never reach them, while it is still possible to test the service by making use of the force-persist mechanism. See also "enabled" setting. ``` **enabled** ``` This option may be used as 'server' setting to reset any 'disabled' setting which would have been inherited from 'default-server' directive as default value. It may also be used as 'default-server' setting to reset any previous 'default-server' 'disabled' setting. ``` **error-limit** <count> ``` If health observing is enabled, the "[error-limit](#error-limit)" parameter specifies the number of consecutive errors that triggers the event selected by the "[on-error](#on-error)" option. By default it is set to 10 consecutive errors. See also the "[check](#check)", "[error-limit](#error-limit)" and "[on-error](#on-error)". ``` **fall** <count> ``` The "[fall](#fall)" parameter states that a server will be considered as dead after <count> consecutive unsuccessful health checks. This value defaults to 3 if unspecified. See also the "[check](#check)", "[inter](#inter)" and "[rise](#rise)" parameters. ``` **force-sslv3** ``` This option enforces use of SSLv3 only when SSL is used to communicate with the server. SSLv3 is generally less expensive than the TLS counterparts for high connection rates. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". See also "ssl-min-ver" and "ssl-max-ver". 
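For illustration only (address, CA path and versions are placeholders), pinning TLS versions on a server line with the preferred ssl-min-ver/ssl-max-ver settings could look like:

    server app1 10.0.0.10:443 ssl verify required ca-file /etc/haproxy/ca.pem ssl-min-ver TLSv1.2 ssl-max-ver TLSv1.3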
``` **force-tlsv10** ``` This option enforces use of TLSv1.0 only when SSL is used to communicate with the server. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **force-tlsv11** ``` This option enforces use of TLSv1.1 only when SSL is used to communicate with the server. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **force-tlsv12** ``` This option enforces use of TLSv1.2 only when SSL is used to communicate with the server. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **force-tlsv13** ``` This option enforces use of TLSv1.3 only when SSL is used to communicate with the server. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". See also "ssl-min-ver" and "ssl-max-ver". ``` **id** <value> ``` Set a persistent ID for the server. This ID must be positive and unique for the proxy. An unused ID will automatically be assigned if unset. The first assigned value will be 1. This ID is currently only returned in statistics. ``` **init-addr** {last | libc | none | <ip>},[...]\* ``` Indicate in what order the server's address should be resolved upon startup if it uses an FQDN. Attempts are made to resolve the address by applying in turn each of the methods mentioned in the comma-delimited list. The first method which succeeds is used. If the end of the list is reached without finding a working method, an error is thrown. Method "last" suggests to pick the address which appears in the state file (see "[server-state-file](#server-state-file)"). Method "libc" uses the libc's internal resolver (gethostbyname() or getaddrinfo() depending on the operating system and build options). Method "none" specifically indicates that the server should start without any valid IP address in a down state. It can be useful to ignore some DNS issues upon startup, waiting for the situation to get fixed later. Finally, an IP address (IPv4 or IPv6) may be provided. It can be the currently known address of the server (e.g. filled by a configuration generator), or the address of a dummy server used to catch old sessions and present them with a decent error message for example. When the "first" load balancing algorithm is used, this IP address could point to a fake server used to trigger the creation of new instances on the fly. This option defaults to "last,libc" indicating that the previous address found in the state file (if any) is used first, otherwise the libc's resolver is used. This ensures continued compatibility with the historic behavior. ``` Example: ```
defaults
    # never fail on address resolution
    default-server init-addr last,libc,none
``` **inter** <delay> **fastinter** <delay> **downinter** <delay> ``` The "[inter](#inter)" parameter sets the interval between two consecutive health checks to <delay> milliseconds. If left unspecified, the delay defaults to 2000 ms. It is also possible to use "[fastinter](#fastinter)" and "[downinter](#downinter)" to optimize delays between checks depending on the server state : ```

| Server state | Interval used |
| --- | --- |
| UP 100% (non-transitional) | "[inter](#inter)" |
| Transitionally UP (going down "[fall](#fall)"), Transitionally DOWN (going up "[rise](#rise)"), or yet unchecked | "[fastinter](#fastinter)" if set, "[inter](#inter)" otherwise |
| DOWN 100% (non-transitional) | "[downinter](#downinter)" if set, "[inter](#inter)" otherwise |

``` Just as with every other time-based parameter, they can be entered in any other explicit unit among { us, ms, s, m, h, d }. The "[inter](#inter)" parameter also serves as a timeout for health checks sent to servers if "[timeout check](#timeout%20check)" is not set. In order to reduce "resonance" effects when multiple servers are hosted on the same hardware, the agent and health checks of all servers are started with a small time offset between them. It is also possible to add some random noise in the agent and health checks interval using the global "[spread-checks](#spread-checks)" keyword. This makes sense for instance when a lot of backends use the same servers. ``` **log-proto** <logproto> ``` The "[log-proto](#log-proto)" specifies the protocol used to forward event messages to a server configured in a ring section. Possible values are "legacy" and "octet-count" corresponding respectively to "Non-transparent-framing" and "Octet counting" in rfc6587. "legacy" is the default. ``` **maxconn** <maxconn> ``` The "maxconn" parameter specifies the maximal number of concurrent connections that will be sent to this server. If the number of incoming concurrent connections goes higher than this value, they will be queued, waiting for a slot to be released. This parameter is very important as it can save fragile servers from going down under extreme loads. If a "[minconn](#minconn)" parameter is specified, the limit becomes dynamic. The default value is "0" which means unlimited. See also the "[minconn](#minconn)" and "[maxqueue](#maxqueue)" parameters, and the backend's "[fullconn](#fullconn)" keyword. In HTTP mode this parameter limits the number of concurrent requests instead of the number of connections. Multiple requests might be multiplexed over a single TCP connection to the server. As an example if you specify a maxconn of 50 you might see between 1 and 50 actual server connections, but no more than 50 concurrent requests. ``` **maxqueue** <maxqueue> ``` The "[maxqueue](#maxqueue)" parameter specifies the maximal number of connections which will wait in the queue for this server. If this limit is reached, next requests will be redispatched to other servers instead of indefinitely waiting to be served. This will break persistence but may allow people to quickly re-log in when the server they try to connect to is dying. Some load balancing algorithms such as leastconn take this into account and accept to add requests into a server's queue up to this value if it is explicitly set to a value greater than zero, which often allows to better smooth the load when dealing with single-digit maxconn values. The default value is "0" which means the queue is unlimited. See also the "maxconn" and "[minconn](#minconn)" parameters and "balance leastconn". ``` **max-reuse** <count> ``` The "[max-reuse](#max-reuse)" argument indicates the HTTP connection processors that they should not reuse a server connection more than this number of times to send new requests. Permitted values are -1 (the default), which disables this limit, or any positive value. Value zero will effectively disable keep-alive. This is only used to work around certain server bugs which cause them to leak resources over time. The argument is not necessarily respected by the lower layers as there might be technical limitations making it impossible to enforce. 
At least HTTP/2 connections to servers will respect it. ``` **minconn** <minconn> ``` When the "[minconn](#minconn)" parameter is set, the maxconn limit becomes a dynamic limit following the backend's load. The server will always accept at least <minconn> connections, never more than <maxconn>, and the limit will be on the ramp between both values when the backend has less than <fullconn> concurrent connections. This makes it possible to limit the load on the server during normal loads, but push it further for important loads without overloading the server during exceptional loads. See also the "maxconn" and "[maxqueue](#maxqueue)" parameters, as well as the "[fullconn](#fullconn)" backend keyword. ``` **namespace** <name> ``` On Linux, it is possible to specify which network namespace a socket will belong to. This directive makes it possible to explicitly bind a server to a namespace different from the default one. Please refer to your operating system's documentation to find more details about network namespaces. ``` **no-agent-check** ``` This option may be used as "server" setting to reset any "[agent-check](#agent-check)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[agent-check](#agent-check)" setting. ``` **no-backup** ``` This option may be used as "server" setting to reset any "[backup](#backup)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[backup](#backup)" setting. ``` **no-check** ``` This option may be used as "server" setting to reset any "[check](#check)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[check](#check)" setting. ``` **no-check-ssl** ``` This option may be used as "server" setting to reset any "[check-ssl](#check-ssl)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[check-ssl](#check-ssl)" setting. ``` **no-send-proxy** ``` This option may be used as "server" setting to reset any "[send-proxy](#send-proxy)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[send-proxy](#send-proxy)" setting. ``` **no-send-proxy-v2** ``` This option may be used as "server" setting to reset any "[send-proxy-v2](#send-proxy-v2)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[send-proxy-v2](#send-proxy-v2)" setting. ``` **no-send-proxy-v2-ssl** ``` This option may be used as "server" setting to reset any "[send-proxy-v2-ssl](#send-proxy-v2-ssl)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[send-proxy-v2-ssl](#send-proxy-v2-ssl)" setting. ``` **no-send-proxy-v2-ssl-cn** ``` This option may be used as "server" setting to reset any "[send-proxy-v2-ssl-cn](#send-proxy-v2-ssl-cn)" setting which would have been inherited from "default-server" directive as default value. 
It may also be used as "default-server" setting to reset any previous "default-server" "[send-proxy-v2-ssl-cn](#send-proxy-v2-ssl-cn)" setting. ``` **no-ssl** ``` This option may be used as "server" setting to reset any "ssl" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "ssl" setting. Note that using `default-server ssl` setting and `no-ssl` on server will however init SSL connection, so it can be later be enabled through the runtime API: see `set server` commands in management doc. ``` **no-ssl-reuse** ``` This option disables SSL session reuse when SSL is used to communicate with the server. It will force the server to perform a full handshake for every new connection. It's probably only useful for benchmarking, troubleshooting, and for paranoid users. ``` **no-sslv3** ``` This option disables support for SSLv3 when SSL is used to communicate with the server. Note that SSLv2 is disabled in the code and cannot be enabled using any configuration option. Use "ssl-min-ver" and "ssl-max-ver" instead. Supported in default-server: No ``` **no-tls-tickets** ``` This setting is only available when support for OpenSSL was built in. It disables the stateless session resumption (RFC 5077 TLS Ticket extension) and force to use stateful session resumption. Stateless session resumption is more expensive in CPU usage for servers. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". The TLS ticket mechanism is only used up to TLS 1.2. Forward Secrecy is compromised with TLS tickets, unless ticket keys are periodically rotated (via reload or by using "[tls-ticket-keys](#tls-ticket-keys)"). See also "[tls-tickets](#tls-tickets)". ``` **no-tlsv10** ``` This option disables support for TLSv1.0 when SSL is used to communicate with the server. Note that SSLv2 is disabled in the code and cannot be enabled using any configuration option. TLSv1 is more expensive than SSLv3 so it often makes sense to disable it when communicating with local servers. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. Supported in default-server: No ``` **no-tlsv11** ``` This option disables support for TLSv1.1 when SSL is used to communicate with the server. Note that SSLv2 is disabled in the code and cannot be enabled using any configuration option. TLSv1 is more expensive than SSLv3 so it often makes sense to disable it when communicating with local servers. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. Supported in default-server: No ``` **no-tlsv12** ``` This option disables support for TLSv1.2 when SSL is used to communicate with the server. Note that SSLv2 is disabled in the code and cannot be enabled using any configuration option. TLSv1 is more expensive than SSLv3 so it often makes sense to disable it when communicating with local servers. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. Supported in default-server: No ``` **no-tlsv13** ``` This option disables support for TLSv1.3 when SSL is used to communicate with the server. Note that SSLv2 is disabled in the code and cannot be enabled using any configuration option. 
TLSv1 is more expensive than SSLv3 so it often makes sense to disable it when communicating with local servers. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". Use "ssl-min-ver" and "ssl-max-ver" instead. Supported in default-server: No ``` **no-verifyhost** ``` This option may be used as "server" setting to reset any "[verifyhost](#verifyhost)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[verifyhost](#verifyhost)" setting. ``` **no-tfo** ``` This option may be used as "server" setting to reset any "tfo" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "tfo" setting. ``` **non-stick** ``` Never add connections allocated to this server to a stick-table. This may be used in conjunction with backup to ensure that stick-table persistence is disabled for backup servers. ``` **npn** <protocols> ``` This enables the NPN TLS extension and advertises the specified protocol list as supported on top of NPN. The protocol list consists in a comma-delimited list of protocol names, for instance: "http/1.1,http/1.0" (without quotes). This requires that the SSL library is built with support for TLS extensions enabled (check with haproxy -vv). Note that the NPN extension has been replaced with the ALPN extension (see the "alpn" keyword), though this one is only available starting with OpenSSL 1.0.2. ``` **observe** <mode> ``` This option enables health adjusting based on observing communication with the server. By default this functionality is disabled and enabling it also requires enabling health checks. There are two supported modes: "layer4" and "layer7". In layer4 mode, only successful/unsuccessful tcp connections are significant. In layer7, which is only allowed for http proxies, responses received from the server are verified, like valid/wrong http code, unparsable headers, a timeout, etc. Valid status codes include 100 to 499, 501 and 505. See also the "[check](#check)", "[on-error](#on-error)" and "[error-limit](#error-limit)". ``` **on-error** <mode> ``` Select what should happen when enough consecutive errors are detected. Currently, four modes are available:
- fastinter: force fastinter
- fail-check: simulate a failed check, also forces fastinter (default)
- sudden-death: simulate a pre-fatal failed health check, one more failed check will mark a server down, forces fastinter
- mark-down: mark the server immediately down and force fastinter
See also the "[check](#check)", "[observe](#observe)" and "[error-limit](#error-limit)". ``` **on-marked-down** <action> ``` Modify what occurs when a server is marked down. Currently one action is available: - shutdown-sessions: Shutdown peer sessions. When this setting is enabled, all connections to the server are immediately terminated when the server goes down. It might be used if the health check detects more complex cases than a simple connection status, and long timeouts would cause the service to remain unresponsive for too long a time. For instance, a health check might detect that a database is stuck and that there's no chance to reuse existing connections anymore. Connections killed this way are logged with a 'D' termination code (for "Down"). 
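For illustration only (the server name and address are placeholders), this is typically written as:

    server db1 10.0.0.5:5432 check on-marked-down shutdown-sessions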
Actions are disabled by default ``` **on-marked-up** <action> ``` Modify what occurs when a server is marked up. Currently one action is available: - shutdown-backup-sessions: Shutdown sessions on all backup servers. This is done only if the server is not in backup state and if it is not disabled (it must have an effective weight > 0). This can be used sometimes to force an active server to take all the traffic back after recovery when dealing with long sessions (e.g. LDAP, SQL, ...). Doing this can cause more trouble than it tries to solve (e.g. incomplete transactions), so use this feature with extreme care. Sessions killed because a server comes up are logged with an 'U' termination code (for "Up"). Actions are disabled by default ``` **pool-low-conn** <max> ``` Set a low threshold on the number of idling connections for a server, below which a thread will not try to steal a connection from another thread. This can be useful to improve CPU usage patterns in scenarios involving many very fast servers, in order to ensure all threads will keep a few idle connections all the time instead of letting them accumulate over one thread and migrating them from thread to thread. Typical values of twice the number of threads seem to show very good performance already with sub-millisecond response times. The default is zero, indicating that any idle connection can be used at any time. It is the recommended setting for normal use. This only applies to connections that can be shared according to the same principles as those applying to "[http-reuse](#http-reuse)". In case connection sharing between threads would be disabled via "[tune.idle-pool.shared](#tune.idle-pool.shared)", it can become very important to use this setting to make sure each thread always has a few connections, or the connection reuse rate will decrease as thread count increases. ``` **pool-max-conn** <max> ``` Set the maximum number of idling connections for a server. -1 means unlimited connections, 0 means no idle connections. The default is -1. When idle connections are enabled, orphaned idle connections which do not belong to any client session anymore are moved to a dedicated pool so that they remain usable by future clients. This only applies to connections that can be shared according to the same principles as those applying to "[http-reuse](#http-reuse)". ``` **pool-purge-delay** <delay> ``` Sets the delay to start purging idle connections. Each <delay> interval, half of the idle connections are closed. 0 means we don't keep any idle connection. The default is 5s. ``` **port** <port> ``` Using the "[port](#port)" parameter, it becomes possible to use a different port to send health-checks or to probe the agent-check. On some servers, it may be desirable to dedicate a port to a specific component able to perform complex tests which are more suitable to health-checks than the application. It is common to run a simple script in inetd for instance. This parameter is ignored if the "[check](#check)" parameter is not set. See also the "[addr](#addr)" parameter. ``` **proto** <name> ``` Forces the multiplexer's protocol to use for the outgoing connections to this server. It must be compatible with the mode of the backend (TCP or HTTP). It must also be usable on the backend side. The list of available protocols is reported in haproxy -vv.The protocols properties are reported : the mode (TCP/HTTP), the side (FE/BE), the mux name and its flags. Some protocols are subject to the head-of-line blocking on server side (flag=HOL_RISK). 
Finally some protocols don't support upgrades (flag=NO_UPG). The HTX compatibility is also reported (flag=HTX). Here are the protocols that may be used as argument to a "proto" directive on a server line : h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|HOL_RISK|NO_UPG fcgi : mode=HTTP side=BE mux=FCGI flags=HTX|HOL_RISK|NO_UPG h1 : mode=HTTP side=FE|BE mux=H1 flags=HTX|NO_UPG none : mode=TCP side=FE|BE mux=PASS flags=NO_UPG Idea behind this option is to bypass the selection of the best multiplexer's protocol for all connections established to this server. See also "[ws](#ws)" to use an alternative protocol for websocket streams. ``` **redir** <prefix> ``` The "[redir](#redir)" parameter enables the redirection mode for all GET and HEAD requests addressing this server. This means that instead of having HAProxy forward the request to the server, it will send an "HTTP 302" response with the "Location" header composed of this prefix immediately followed by the requested URI beginning at the leading '/' of the path component. That means that no trailing slash should be used after <prefix>. All invalid requests will be rejected, and all non-GET or HEAD requests will be normally served by the server. Note that since the response is completely forged, no header mangling nor cookie insertion is possible in the response. However, cookies in requests are still analyzed, making this solution completely usable to direct users to a remote location in case of local disaster. Main use consists in increasing bandwidth for static servers by having the clients directly connect to them. Note: never use a relative location here, it would cause a loop between the client and HAProxy! ``` Example : ``` server srv1 192.168.1.1:80 redir http://image1.mydomain.com check ``` **rise** <count> ``` The "[rise](#rise)" parameter states that a server will be considered as operational after <count> consecutive successful health checks. This value defaults to 2 if unspecified. See also the "[check](#check)", "[inter](#inter)" and "[fall](#fall)" parameters. ``` **resolve-opts** <option>,<option>,... ``` Comma separated list of options to apply to DNS resolution linked to this server. Available options: * allow-dup-ip By default, HAProxy prevents IP address duplication in a backend when DNS resolution at runtime is in operation. That said, for some cases, it makes sense that two servers (in the same backend, being resolved by the same FQDN) have the same IP address. For such case, simply enable this option. This is the opposite of prevent-dup-ip. * ignore-weight Ignore any weight that is set within an SRV record. This is useful when you would like to control the weights using an alternate method, such as using an "[agent-check](#agent-check)" or through the runtime api. * prevent-dup-ip Ensure HAProxy's default behavior is enforced on a server: prevent re-using an IP address already set to a server in the same backend and sharing the same fqdn. This is the opposite of allow-dup-ip. 
``` Example: ```
backend b_myapp
    default-server init-addr none resolvers dns
    server s1 myapp.example.com:80 check resolve-opts allow-dup-ip
    server s2 myapp.example.com:81 check resolve-opts allow-dup-ip
``` ``` With the option allow-dup-ip set:
* if the nameserver returns a single IP address, then both servers will use it
* if the nameserver returns 2 IP addresses, then each server will pick up a different address
Default value: not set ``` **resolve-prefer** <family> ``` When DNS resolution is enabled for a server and multiple IP addresses from different families are returned, HAProxy will prefer using an IP address from the family mentioned in the "[resolve-prefer](#resolve-prefer)" parameter. Available families: "[ipv4](#ipv4)" and "[ipv6](#ipv6)" Default value: ipv6 ``` Example: ``` server s1 app1.domain.com:80 resolvers mydns resolve-prefer ipv6 ``` **resolve-net** <network>[,<network>[,...]] ``` This option prioritizes the choice of an IP address matching a network. This is useful with clouds to prefer a local IP. In some cases, a cloud high availability service can be announced with many IP addresses on many different datacenters. The latency between datacenters is not negligible, so this option makes it possible to prefer a local datacenter. If no address matches the configured network, another address is selected. ``` Example: ``` server s1 app1.domain.com:80 resolvers mydns resolve-net 10.0.0.0/8 ``` **resolvers** <id> ``` Points to an existing "resolvers" section to resolve the current server's hostname. ``` Example: ``` server s1 app1.domain.com:80 check resolvers mydns ``` ``` See also [section 5.3](#5.3) ``` **send-proxy** ``` The "[send-proxy](#send-proxy)" parameter enforces use of the PROXY protocol over any connection established to this server. The PROXY protocol informs the other end about the layer 3/4 addresses of the incoming connection, so that it can know the client's address or the public address it accessed to, whatever the upper layer protocol. For connections accepted by an "[accept-proxy](#accept-proxy)" or "[accept-netscaler-cip](#accept-netscaler-cip)" listener, the advertised address will be used. Only TCPv4 and TCPv6 address families are supported. Other families, such as Unix sockets, will report an UNKNOWN family. Servers using this option can fully be chained to another instance of HAProxy listening with an "[accept-proxy](#accept-proxy)" setting. This setting must not be used if the server isn't aware of the protocol. When health checks are sent to the server, the PROXY protocol is automatically used when this option is set, unless there is an explicit "[port](#port)" or "[addr](#addr)" directive, in which case an explicit "[check-send-proxy](#check-send-proxy)" directive would also be needed to use the PROXY protocol. See also the "[no-send-proxy](#no-send-proxy)" option of this section and the "[accept-proxy](#accept-proxy)" and "[accept-netscaler-cip](#accept-netscaler-cip)" options of the "bind" keyword. ``` **send-proxy-v2** ``` The "[send-proxy-v2](#send-proxy-v2)" parameter enforces use of the PROXY protocol version 2 over any connection established to this server. The PROXY protocol informs the other end about the layer 3/4 addresses of the incoming connection, so that it can know the client's address or the public address it accessed to, whatever the upper layer protocol. It also sends ALPN information if an ALPN protocol has been negotiated. This setting must not be used if the server isn't aware of this version of the protocol. 
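For illustration only (the server name and address are placeholders), enabling version 2 of the PROXY protocol towards a server is simply:

    server app1 10.0.0.40:80 send-proxy-v2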
See also the "[no-send-proxy-v2](#no-send-proxy-v2)" option of this section and send-proxy" option of the "bind" keyword. ``` **proxy-v2-options** <option>[,<option>]\* ``` The "[proxy-v2-options](#proxy-v2-options)" parameter add options to send in PROXY protocol version 2 when "[send-proxy-v2](#send-proxy-v2)" is used. Options available are: - ssl : See also "[send-proxy-v2-ssl](#send-proxy-v2-ssl)". - cert-cn : See also "[send-proxy-v2-ssl-cn](#send-proxy-v2-ssl-cn)". - ssl-cipher: Name of the used cipher. - cert-sig : Signature algorithm of the used certificate. - cert-key : Key algorithm of the used certificate - authority : Host name value passed by the client (only SNI from a TLS connection is supported). - crc32c : Checksum of the PROXYv2 header. - unique-id : Send a unique ID generated using the frontend's "[unique-id-format](#unique-id-format)" within the PROXYv2 header. This unique-id is primarily meant for "mode tcp". It can lead to unexpected results in "mode http", because the generated unique ID is also used for the first HTTP request within a Keep-Alive connection. ``` **send-proxy-v2-ssl** ``` The "[send-proxy-v2-ssl](#send-proxy-v2-ssl)" parameter enforces use of the PROXY protocol version 2 over any connection established to this server. The PROXY protocol informs the other end about the layer 3/4 addresses of the incoming connection, so that it can know the client's address or the public address it accessed to, whatever the upper layer protocol. In addition, the SSL information extension of the PROXY protocol is added to the PROXY protocol header. This setting must not be used if the server isn't aware of this version of the protocol. See also the "[no-send-proxy-v2-ssl](#no-send-proxy-v2-ssl)" option of this section and the "[send-proxy-v2](#send-proxy-v2)" option of the "bind" keyword. ``` **send-proxy-v2-ssl-cn** ``` The "[send-proxy-v2-ssl](#send-proxy-v2-ssl)" parameter enforces use of the PROXY protocol version 2 over any connection established to this server. The PROXY protocol informs the other end about the layer 3/4 addresses of the incoming connection, so that it can know the client's address or the public address it accessed to, whatever the upper layer protocol. In addition, the SSL information extension of the PROXY protocol, along along with the Common Name from the subject of the client certificate (if any), is added to the PROXY protocol header. This setting must not be used if the server isn't aware of this version of the protocol. See also the "[no-send-proxy-v2-ssl-cn](#no-send-proxy-v2-ssl-cn)" option of this section and the "[send-proxy-v2](#send-proxy-v2)" option of the "bind" keyword. ``` **shard** <shard> ``` This parameter in used only in the context of stick-tables synchronisation with peers protocol. The "[shard](#shard)" parameter identifies the peers which will receive all the stick-table updates for keys with this shard as distribution hash. The accepted values are 0 up to "shards" parameter value specified in the "[peers](#peers)" section. 0 value is the default value meaning that the peer will receive all the key updates. Greater values than "shards" will be ignored. This is also the case for any value provided to the local peer. 
``` Example : ```
peers mypeers
    shards 3
    peer A 127.0.0.1:40001    # local peer without shard value (0 internally)
    peer B 127.0.0.1:40002 shard 1
    peer C 127.0.0.1:40003 shard 2
    peer D 127.0.0.1:40004 shard 3
``` **slowstart** <start\_time\_in\_ms> ``` The "[slowstart](#slowstart)" parameter for a server accepts a value in milliseconds which indicates after how long a server which has just come back up will run at full speed. Just as with every other time-based parameter, it can be entered in any other explicit unit among { us, ms, s, m, h, d }. The speed grows linearly from 0 to 100% during this time. The limitation applies to two parameters :
- maxconn: the number of connections accepted by the server will grow from 1 to 100% of the usual dynamic limit defined by (minconn,maxconn,fullconn).
- weight: when the backend uses a dynamic weighted algorithm, the weight grows linearly from 1 to 100%. In this case, the weight is updated at every health-check. For this reason, it is important that the "[inter](#inter)" parameter is smaller than the "[slowstart](#slowstart)", in order to maximize the number of steps.
The slowstart never applies when HAProxy starts, otherwise it would cause trouble to running servers. It only applies when a server has been previously seen as failed. ``` **sni** <expression> ``` The "[sni](#sni)" parameter evaluates the sample fetch expression, converts it to a string and uses the result as the host name sent in the SNI TLS extension to the server. A typical use case is to send the SNI received from the client in a bridged TCP/SSL scenario, using the "[ssl\_fc\_sni](#ssl_fc_sni)" sample fetch for the expression. THIS MUST NOT BE USED FOR HTTPS, where req.hdr(host) should be used instead, since SNI in HTTPS must always match the Host field and clients are allowed to use different host names over the same connection. If "verify required" is set (which is the recommended setting), the resulting name will also be matched against the server certificate's names. See the "verify" directive for more details. If you want to set a SNI for health checks, see the "[check-sni](#check-sni)" directive for more details. ``` **source** <addr>[:<pl>[-<ph>]] [usesrc { <addr2>[:<port2>] | client | clientip } ] **source** <addr>[:<port>] [usesrc { <addr2>[:<port2>] | hdr\_ip(<hdr>[,<occ>]) } ] **source** <addr>[:<pl>[-<ph>]] [interface <name>] ... ``` The "source" parameter sets the source address which will be used when connecting to the server. It follows the exact same parameters and principle as the backend "source" keyword, except that it only applies to the server referencing it. Please consult the "source" keyword for details. Additionally, the "source" statement on a server line allows one to specify a source port range by indicating the lower and higher bounds delimited by a dash ('-'). Some operating systems might require a valid IP address when a source port range is specified. It is permitted to have the same IP/range for several servers. Doing so makes it possible to bypass the maximum of 64k total concurrent connections. The limit will then reach 64k connections per server. Since Linux 4.2/libc 2.23 IP_BIND_ADDRESS_NO_PORT is set for connections specifying the source address without port(s). ``` **ssl** ``` This option enables SSL ciphering on outgoing connections to the server. It is critical to verify server certificates using "verify" when using SSL to connect to servers, otherwise the communication is prone to trivial man-in-the-middle attacks rendering SSL useless. 
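For illustration only (the address, CA path and host name are placeholders), a verified SSL connection to a server is typically declared as:

    server app1 10.0.0.50:443 ssl verify required ca-file /etc/haproxy/ca.pem verifyhost app.internal.example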
When this option is used, health checks are automatically sent in SSL too unless there is a "[port](#port)" or an "[addr](#addr)" directive indicating the check should be sent to a different location. See the "[no-ssl](#no-ssl)" to disable "ssl" option and "[check-ssl](#check-ssl)" option to force SSL health checks. ``` **ssl-max-ver** [ SSLv3 | TLSv1.0 | TLSv1.1 | TLSv1.2 | TLSv1.3 ] ``` This option enforces use of <version> or lower when SSL is used to communicate with the server. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". See also "ssl-min-ver". ``` **ssl-min-ver** [ SSLv3 | TLSv1.0 | TLSv1.1 | TLSv1.2 | TLSv1.3 ] ``` This option enforces use of <version> or upper when SSL is used to communicate with the server. This option is also available on global statement "[ssl-default-server-options](#ssl-default-server-options)". See also "ssl-max-ver". ``` **ssl-reuse** ``` This option may be used as "server" setting to reset any "[no-ssl-reuse](#no-ssl-reuse)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[no-ssl-reuse](#no-ssl-reuse)" setting. ``` **stick** ``` This option may be used as "server" setting to reset any "[non-stick](#non-stick)" setting which would have been inherited from "default-server" directive as default value. It may also be used as "default-server" setting to reset any previous "default-server" "[non-stick](#non-stick)" setting. ``` **socks4** <addr>:<port> ``` This option enables upstream socks4 tunnel for outgoing connections to the server. Using this option won't force the health check to go via socks4 by default. You will have to use the keyword "[check-via-socks4](#check-via-socks4)" to enable it. ``` **tcp-ut** <delay> ``` Sets the TCP User Timeout for all outgoing connections to this server. This option is available on Linux since version 2.6.37. It allows HAProxy to configure a timeout for sockets which contain data not receiving an acknowledgment for the configured delay. This is especially useful on long-lived connections experiencing long idle periods such as remote terminals or database connection pools, where the client and server timeouts must remain high to allow a long period of idle, but where it is important to detect that the server has disappeared in order to release all resources associated with its connection (and the client's session). One typical use case is also to force dead server connections to die when health checks are too slow or during a soft reload since health checks are then disabled. The argument is a delay expressed in milliseconds by default. This only works for regular TCP connections, and is ignored for other protocols. ``` **tfo** ``` This option enables using TCP fast open when connecting to servers, on systems that support it (currently only the Linux kernel >= 4.11). See the "tfo" bind option for more information about TCP fast open. Please note that when using tfo, you should also use the "conn-failure", "empty-response" and "response-timeout" keywords for "[retry-on](#retry-on)", or HAProxy won't be able to retry the connection on failure. See also "[no-tfo](#no-tfo)". ``` **track** [<proxy>/]<server> ``` This option enables ability to set the current state of the server by tracking another one. 
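As a brief illustrative sketch (backend and server names are placeholders), a server can reuse the health status of a server checked in another proxy:

```
backend be_checked
    server app1 192.0.2.10:80 check

backend be_main
    # app1 follows the up/down state of be_checked/app1 instead of running its own checks
    server app1 192.0.2.10:80 track be_checked/app1
```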
It is possible to track a server which itself tracks another server, provided that at the end of the chain, a server has health checks enabled. If <proxy> is omitted the current one is used. If disable-on-404 is used, it has to be enabled on both proxies. ``` **tls-tickets** ``` This option may be used as "server" setting to reset any "no-tls-tickets" setting which would have been inherited from "default-server" directive as default value. The TLS ticket mechanism is only used up to TLS 1.2. Forward Secrecy is compromised with TLS tickets, unless ticket keys are periodically rotated (via reload or by using "[tls-ticket-keys](#tls-ticket-keys)"). It may also be used as "default-server" setting to reset any previous "default-server" "no-tls-tickets" setting. ``` **verify** [none|required] ``` This setting is only available when support for OpenSSL was built in. If set to 'none', server certificate is not verified. In the other case, The certificate provided by the server is verified using CAs from 'ca-file' and optional CRLs from 'crl-file' after having checked that the names provided in the certificate's subject and subjectAlternateNames attributes match either the name passed using the "[sni](#sni)" directive, or if not provided, the static host name passed using the "[verifyhost](#verifyhost)" directive. When no name is found, the certificate's names are ignored. For this reason, without SNI it's important to use "[verifyhost](#verifyhost)". On verification failure the handshake is aborted. It is critically important to verify server certificates when using SSL to connect to servers, otherwise the communication is prone to trivial man-in-the-middle attacks rendering SSL totally useless. Unless "ssl_server_verify" appears in the global section, "verify" is set to "required" by default. ``` **verifyhost** <hostname> ``` This setting is only available when support for OpenSSL was built in, and only takes effect if 'verify required' is also specified. This directive sets a default static hostname to check the server's certificate against when no SNI was used to connect to the server. If SNI is not used, this is the only way to enable hostname verification. This static hostname, when set, will also be used for health checks (which cannot provide an SNI value). If none of the hostnames in the certificate match the specified hostname, the handshake is aborted. The hostnames in the server-provided certificate may include wildcards. See also "verify", "[sni](#sni)" and "[no-verifyhost](#no-verifyhost)" options. ``` **weight** <weight> ``` The "[weight](#weight)" parameter is used to adjust the server's weight relative to other servers. All servers will receive a load proportional to their weight relative to the sum of all weights, so the higher the weight, the higher the load. The default weight is 1, and the maximal value is 256. A value of 0 means the server will not participate in load-balancing but will still accept persistent connections. If this parameter is used to distribute the load according to server's capacity, it is recommended to start with values which can both grow and shrink, for instance between 10 and 100 to leave enough room above and below for later adjustments. ``` **ws** { auto | h1 | h2 } ``` This option allows to configure the protocol used when relaying websocket streams. This is most notably useful when using an HTTP/2 backend without the support for H2 websockets through the RFC8441. The default mode is "auto". This will reuse the same protocol as the main one. 
The only difference is when using ALPN. In this case, it can try to downgrade the ALPN to "http/1.1" only for websocket streams if the configured server ALPN contains it. The value "h1" is used to force HTTP/1.1 for websocket streams, through ALPN if SSL ALPN is activated for the server. Similarly, "h2" can be used to force HTTP/2.0 websockets. Use this value with care: the server must support RFC8441 or an error will be reported by HAProxy when relaying websockets. Note that NPN is not taken into account as its usage has been deprecated in favor of the ALPN extension. See also "alpn" and "proto". ``` ### 5.3. Server IP address resolution using DNS ``` HAProxy allows using a host name on the server line to retrieve its IP address using name servers. By default, HAProxy resolves the name when parsing the configuration file, at startup, and caches the result for the process's life. This is not sufficient in some cases, such as in Amazon where a server's IP can change after a reboot or an ELB virtual IP can change based on the current workload. This chapter describes how HAProxy can be configured to perform server name resolution at run time. Whether run time server name resolution has been enabled or not, HAProxy will carry on doing the first resolution when parsing the configuration. ``` #### 5.3.1. Global overview ``` As we've seen in the introduction, name resolution in HAProxy occurs at two different steps of the process life: 1. when starting up, HAProxy parses the server line definition and matches a host name. It uses libc functions to get the host name resolved. This resolution relies on the /etc/resolv.conf file. 2. at run time, HAProxy periodically performs name resolution for servers requiring DNS resolution. A few other events can trigger a name resolution at run time: - when a server's health check ends up in a connection timeout: this may be because the server has a new IP address. So we need to trigger a name resolution to know this new IP. When using resolvers, the server name can either be a hostname, or an SRV label. HAProxy considers anything that starts with an underscore to be an SRV label. If an SRV label is specified, then the corresponding SRV records will be retrieved from the DNS server, and the provided hostnames will be used. The SRV label will be checked periodically, and if any servers are added or removed, HAProxy will automatically do the same. A few important things to notice: - all the name servers are queried at the same time. HAProxy will process the first valid response. - a resolution is considered as invalid (NX, timeout, refused) only when all the name servers return an error. ``` #### 5.3.2. The resolvers section ``` This section is dedicated to host information related to name resolution in HAProxy. There can be as many resolvers sections as needed. Each section can contain many name servers. At startup, HAProxy tries to generate a resolvers section named "default", if no section was named this way in the configuration. This section is used by default by the httpclient and uses the parse-resolv-conf keyword. If HAProxy fails to generate this section automatically, no error or warning is emitted. When multiple name servers are configured in a resolvers section, then HAProxy uses the first valid response. In case of invalid responses, only the last one is treated. The purpose is to give a slow server the chance to deliver a valid answer after a fast but faulty or outdated server. When each server returns a different error type, then only the last error is used by HAProxy.
The following processing is applied on this error: 1. HAProxy retries the same DNS query with a new query type. A queries are switched to AAAA, or the opposite. SRV queries are not concerned here. Timeout errors are also excluded. 2. When the fallback on the query type has been done (or was not applicable), HAProxy retries the original DNS query, with the preferred query type. 3. HAProxy retries the previous steps <resolve_retries> times. If no valid response is received after that, it stops the DNS resolution and reports the error. For example, with 2 name servers configured in a resolvers section, the following scenarios are possible: - First response is valid and is applied directly, second response is ignored - First response is invalid and second one is valid, then second response is applied - First response is an NX domain and second one a truncated response, then HAProxy retries the query with a new type - First response is an NX domain and second one is a timeout, then HAProxy retries the query with a new type - Query timed out for both name servers, then HAProxy retries it with the same query type As a DNS server may not answer all the IPs in one DNS request, HAProxy keeps a cache of previous answers; an answer will be considered obsolete after <hold obsolete> seconds without the IP being returned. ``` **resolvers** <resolvers id> ``` Creates a new name server list labeled <resolvers id>. A resolvers section accepts the following parameters: ``` **accepted\_payload\_size** <nb> ``` Defines the maximum payload size accepted by HAProxy and announced to all the name servers configured in this resolvers section. <nb> is in bytes. If not set, HAProxy announces 512 (the minimal value defined by RFC 6891). Note: the maximum allowed value is 65535. The recommended value for UDP is 4096, and it is not recommended to exceed 8192 unless you are sure that your system and network can handle it (over 65507 makes no sense since that is the maximum UDP payload size). If you are using only TCP nameservers to handle huge DNS responses, you should set this value to the maximum: 65535. ``` **nameserver** <name> <address>[:port] [param\*] ``` Used to configure a nameserver. The <name> of the nameserver should be unique. By default the <address> is considered of type datagram. This means that if an IPv4 or IPv6 address is configured without a special address prefix (see section 11), the UDP protocol will be used. If a stream protocol address prefix is used, the nameserver will be considered as a stream server (TCP for instance) and the "server" parameters found in section 5.2 which are relevant for DNS resolving will be considered. Note: currently, in TCP mode, 4 queries are pipelined on the same connection. A batch of idle connections is removed every 5 seconds. "maxconn" can be configured to limit the number of those concurrent connections, and TLS should also be usable if the server supports it. ``` **parse-resolv-conf** ``` Adds all nameservers found in /etc/resolv.conf to this resolvers section's nameserver list. They are ordered as if each nameserver in /etc/resolv.conf had individually been placed in the resolvers section in place of this directive. ``` **hold** <status> <period> ``` Defines the <period> during which the last name resolution should be kept, based on the last resolution <status>. <status> : last name resolution status. Acceptable values are "nx", "other", "refused", "timeout", "valid", "obsolete". <period> : interval between two successive name resolutions when the last answer was in <status>. It follows the HAProxy time format. <period> is in milliseconds by default.
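As a rough sketch (section name and addresses are placeholders, and the "tcp@" stream address prefix from section 11 is assumed to be available in the HAProxy version in use), a resolvers section mixing a UDP and a TCP nameserver:

```
resolvers internal-dns
    nameserver ns1 192.0.2.53:53        # datagram (UDP) nameserver, the default
    nameserver ns2 tcp@192.0.2.54:53    # stream (TCP) nameserver
    accepted_payload_size 4096
    hold valid 10s
```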
Default value is 10s for "valid", 0s for "obsolete" and 30s for others. ``` **resolve\_retries** <nb> ``` Defines the number <nb> of queries to send to resolve a server name before giving up. Default value: 3 A retry occurs on name server timeout or when the full sequence of DNS query type failover is over and we need to start up from the default ANY query type. ``` **timeout** <event> <time> ``` Defines timeouts related to name resolution. <event> : the event to which the <time> timeout period applies. Events available are: - resolve : default time to trigger name resolutions when no other time applies. Default value: 1s - retry : time between two DNS queries, when no valid response has been received. Default value: 1s <time> : time related to the event. It follows the HAProxy time format. <time> is expressed in milliseconds. ``` Example: ``` resolvers mydns nameserver dns1 10.0.0.1:53 nameserver dns2 10.0.0.2:53 nameserver dns3 tcp@10.0.0.3:53 parse-resolv-conf resolve_retries 3 timeout resolve 1s timeout retry 1s hold other 30s hold refused 30s hold nx 30s hold timeout 30s hold valid 10s hold obsolete 30s ``` 6. Cache --------- ``` HAProxy provides a cache, which was designed to cache small objects (favicon, css...). This is a minimalist low-maintenance cache which runs in RAM. The cache is based on a memory area shared between all threads, and split in 1kB blocks. If an object is not used anymore, it can be deleted to store a new object independently of its expiration date. The oldest objects are deleted first when we try to allocate a new one. The cache uses a hash of the host header and the URI as the key. It's possible to view the status of a cache using the Unix socket command "show cache"; consult [section 9.3](#9.3) "Unix Socket commands" of the Management Guide for more details. When an object is delivered from the cache, the server name in the log is replaced by "<CACHE>". ``` ### 6.1. Limitation ``` The cache won't store and won't deliver objects in these cases: - If the response is not a 200 - If the response contains a Vary header and either the process-vary option is disabled, or a currently unmanaged header is specified in the Vary value (only accept-encoding and referer are managed for now) - If the Content-Length + the headers size is greater than "[max-object-size](#max-object-size)" - If the response is not cacheable - If the response does not have an explicit expiration time (s-maxage or max-age Cache-Control directives or Expires header) or a validator (ETag or Last-Modified headers) - If the process-vary option is enabled and there are already max-secondary-entries entries with the same primary key as the current response - If the process-vary option is enabled and the response has an unknown encoding (not mentioned in https://www.iana.org/assignments/http-parameters/http-parameters.xhtml) while varying on the accept-encoding client header - If the request is not a GET - If the HTTP version of the request is smaller than 1.1 - If the request contains an Authorization header ``` ### 6.2. Setup ``` To set up a cache, you must define a cache section and use it in a proxy with the corresponding http-request and response actions. ``` #### 6.2.1. Cache section **cache** <name> ``` Declare a cache section and allocate a shared cache memory area named <name>. The size of the cache is mandatory. ``` **total-max-size** <megabytes> ``` Define the size in RAM of the cache in megabytes. This size is split in blocks of 1kB which are used by the cache entries.
Its maximum value is 4095. ``` **max-object-size** <bytes> ``` Define the maximum size of the objects to be cached. Must not be greater than an half of "[total-max-size](#total-max-size)". If not set, it equals to a 256th of the cache size. All objects with sizes larger than "[max-object-size](#max-object-size)" will not be cached. ``` **max-age** <seconds> ``` Define the maximum expiration duration. The expiration is set as the lowest value between the s-maxage or max-age (in this order) directive in the Cache-Control response header and this value. The default value is 60 seconds, which means that you can't cache an object more than 60 seconds by default. ``` **process-vary** <on/off> ``` Enable or disable the processing of the Vary header. When disabled, a response containing such a header will never be cached. When enabled, we need to calculate a preliminary hash for a subset of request headers on all the incoming requests (which might come with a cpu cost) which will be used to build a secondary key for a given request (see RFC 7234#4.1). The default value is off (disabled). ``` **max-secondary-entries** <number> ``` Define the maximum number of simultaneous secondary entries with the same primary key in the cache. This needs the vary support to be enabled. Its default value is 10 and should be passed a strictly positive integer. ``` #### 6.2.2. Proxy section **http-request cache-use** <name> [ { if | unless } <condition> ] ``` Try to deliver a cached object from the cache <name>. This directive is also mandatory to store the cache as it calculates the cache hash. If you want to use a condition for both storage and delivering that's a good idea to put it after this one. ``` **http-response cache-store** <name> [ { if | unless } <condition> ] ``` Store an http-response within the cache. The storage of the response headers is done at this step, which means you can use others http-response actions to modify headers before or after the storage of the response. This action is responsible for the setup of the cache storage filter. ``` Example: ``` backend bck1 mode http http-request cache-use foobar http-response cache-store foobar server srv1 127.0.0.1:80 cache foobar total-max-size 4 max-age 240 ``` 7. Using ACLs and fetching samples ----------------------------------- ``` HAProxy is capable of extracting data from request or response streams, from client or server information, from tables, environmental information etc... The action of extracting such data is called fetching a sample. Once retrieved, these samples may be used for various purposes such as a key to a stick-table, but most common usages consist in matching them against predefined constant data called patterns. ``` ### 7.1. ACL basics ``` The use of Access Control Lists (ACL) provides a flexible solution to perform content switching and generally to take decisions based on content extracted from the request, the response or any environmental status. The principle is simple : - extract a data sample from a stream, table or the environment - optionally apply some format conversion to the extracted sample - apply one or multiple pattern matching methods on this sample - perform actions only when a pattern matches the sample The actions generally consist in blocking a request, selecting a backend, or adding a header. In order to define a test, the "acl" keyword is used. The syntax is : acl <aclname> <criterion> [flags] [operator] [<value>] ... This creates a new ACL <aclname> or completes an existing one with new tests. 
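For example, a small illustrative sketch (the ACL name and patterns are arbitrary) showing how a second "acl" line with the same name completes the first one:

```
# static_req matches if either test matches: the second line
# completes the ACL declared by the first one
acl static_req path_beg /assets /img
acl static_req path_end .css .js .png
```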
Those tests apply to the portion of request/response specified in <criterion> and may be adjusted with optional flags [flags]. Some criteria also support an operator which may be specified before the set of values. Optionally some conversion operators may be applied to the sample, and they will be specified as a comma-delimited list of keywords just after the first keyword. The values are of the type supported by the criterion, and are separated by spaces. ACL names must be formed from upper and lower case letters, digits, '-' (dash), '_' (underscore) , '.' (dot) and ':' (colon). ACL names are case-sensitive, which means that "my_acl" and "My_Acl" are two different ACLs. There is no enforced limit to the number of ACLs. The unused ones do not affect performance, they just consume a small amount of memory. The criterion generally is the name of a sample fetch method, or one of its ACL specific declinations. The default test method is implied by the output type of this sample fetch method. The ACL declinations can describe alternate matching methods of a same sample fetch method. The sample fetch methods are the only ones supporting a conversion. Sample fetch methods return data which can be of the following types : - boolean - integer (signed or unsigned) - IPv4 or IPv6 address - string - data block Converters transform any of these data into any of these. For example, some converters might convert a string to a lower-case string while other ones would turn a string to an IPv4 address, or apply a netmask to an IP address. The resulting sample is of the type of the last converter applied to the list, which defaults to the type of the sample fetch method. Each sample or converter returns data of a specific type, specified with its keyword in this documentation. When an ACL is declared using a standard sample fetch method, certain types automatically involved a default matching method which are summarized in the table below : +---------------------+-----------------+ | Sample or converter | Default | | output type | matching method | +---------------------+-----------------+ | boolean | bool | +---------------------+-----------------+ | integer | int | +---------------------+-----------------+ | ip | ip | +---------------------+-----------------+ | string | str | +---------------------+-----------------+ | binary | none, use "-m" | +---------------------+-----------------+ Note that in order to match a binary samples, it is mandatory to specify a matching method, see below. The ACL engine can match these types against patterns of the following types : - boolean - integer or integer range - IP address / network - string (exact, substring, suffix, prefix, subdir, domain) - regular expression - hex block The following ACL flags are currently supported : -i : ignore case during matching of all subsequent patterns. -f : load patterns from a file. -m : use a specific pattern matching method -n : forbid the DNS resolutions -M : load the file pointed by -f like a map file. -u : force the unique id of the ACL -- : force end of flags. Useful when a string looks like one of the flags. The "-f" flag is followed by the name of a file from which all lines will be read as individual values. It is even possible to pass multiple "-f" arguments if the patterns are to be loaded from multiple files. Empty lines as well as lines beginning with a sharp ('#') will be ignored. All leading spaces and tabs will be stripped. 
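As a brief sketch (the file path is a placeholder), loading source address patterns from a file with "-f":

```
# /etc/haproxy/blocked-ips.lst contains one address or network per line
acl blocked_src src -f /etc/haproxy/blocked-ips.lst
http-request deny if blocked_src
```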
If it is absolutely necessary to insert a valid pattern beginning with a sharp, just prefix it with a space so that it is not taken for a comment. Depending on the data type and match method, HAProxy may load the lines into a binary tree, allowing very fast lookups. This is true for IPv4 and exact string matching. In this case, duplicates will automatically be removed. The "-M" flag allows an ACL to use a map file. If this flag is set, the file is parsed as two column file. The first column contains the patterns used by the ACL, and the second column contain the samples. The sample can be used later by a map. This can be useful in some rare cases where an ACL would just be used to check for the existence of a pattern in a map before a mapping is applied. The "-u" flag forces the unique id of the ACL. This unique id is used with the socket interface to identify ACL and dynamically change its values. Note that a file is always identified by its name even if an id is set. Also, note that the "-i" flag applies to subsequent entries and not to entries loaded from files preceding it. For instance : acl valid-ua hdr(user-agent) -f exact-ua.lst -i -f generic-ua.lst test In this example, each line of "exact-ua.lst" will be exactly matched against the "user-agent" header of the request. Then each line of "generic-ua" will be case-insensitively matched. Then the word "test" will be insensitively matched as well. The "-m" flag is used to select a specific pattern matching method on the input sample. All ACL-specific criteria imply a pattern matching method and generally do not need this flag. However, this flag is useful with generic sample fetch methods to describe how they're going to be matched against the patterns. This is required for sample fetches which return data type for which there is no obvious matching method (e.g. string or binary). When "-m" is specified and followed by a pattern matching method name, this method is used instead of the default one for the criterion. This makes it possible to match contents in ways that were not initially planned, or with sample fetch methods which return a string. The matching method also affects the way the patterns are parsed. The "-n" flag forbids the dns resolutions. It is used with the load of ip files. By default, if the parser cannot parse ip address it considers that the parsed string is maybe a domain name and try dns resolution. The flag "-n" disable this resolution. It is useful for detecting malformed ip lists. Note that if the DNS server is not reachable, the HAProxy configuration parsing may last many minutes waiting for the timeout. During this time no error messages are displayed. The flag "-n" disable this behavior. Note also that during the runtime, this function is disabled for the dynamic acl modifications. There are some restrictions however. Not all methods can be used with all sample fetch methods. Also, if "-m" is used in conjunction with "-f", it must be placed first. The pattern matching method must be one of the following : - "found" : only check if the requested sample could be found in the stream, but do not compare it against any pattern. It is recommended not to pass any pattern to avoid confusion. This matching method is particularly useful to detect presence of certain contents such as headers, cookies, etc... even if they are empty and without comparing them to anything nor counting them. - "bool" : check the value as a boolean. 
It can only be applied to fetches which return a boolean or integer value, and takes no pattern. Value zero or false does not match, all other values do match. - "[int](#int)" : match the value as an integer. It can be used with integer and boolean samples. Boolean false is integer 0, true is integer 1. - "ip" : match the value as an IPv4 or IPv6 address. It is compatible with IP address samples only, so it is implied and never needed. - "[bin](#bin)" : match the contents against a hexadecimal string representing a binary sequence. This may be used with binary or string samples. - "len" : match the sample's length as an integer. This may be used with binary or string samples. - "[str](#str)" : exact match : match the contents against a string. This may be used with binary or string samples. - "[sub](#sub)" : substring match : check that the contents contain at least one of the provided string patterns. This may be used with binary or string samples. - "reg" : regex match : match the contents against a list of regular expressions. This may be used with binary or string samples. - "beg" : prefix match : check that the contents begin like the provided string patterns. This may be used with binary or string samples. - "end" : suffix match : check that the contents end like the provided string patterns. This may be used with binary or string samples. - "dir" : subdir match : check that a slash-delimited portion of the contents exactly matches one of the provided string patterns. This may be used with binary or string samples. - "dom" : domain match : check that a dot-delimited portion of the contents exactly match one of the provided string patterns. This may be used with binary or string samples. For example, to quickly detect the presence of cookie "JSESSIONID" in an HTTP request, it is possible to do : acl jsess_present req.cook(JSESSIONID) -m found In order to apply a regular expression on the 500 first bytes of data in the buffer, one would use the following acl : acl script_tag req.payload(0,500) -m reg -i <script> On systems where the regex library is much slower when using "-i", it is possible to convert the sample to lowercase before matching, like this : acl script_tag req.payload(0,500),lower -m reg <script> All ACL-specific criteria imply a default matching method. Most often, these criteria are composed by concatenating the name of the original sample fetch method and the matching method. For example, "hdr_beg" applies the "beg" match to samples retrieved using the "[hdr](#hdr)" fetch method. This matching method is only usable when the keyword is used alone, without any converter. In case any such converter were to be applied after such an ACL keyword, the default matching method from the ACL keyword is simply ignored since what will matter for the matching is the output type of the last converter. Since all ACL-specific criteria rely on a sample fetch method, it is always possible instead to use the original sample fetch method and the explicit matching method using "-m". If an alternate match is specified using "-m" on an ACL-specific criterion, the matching method is simply applied to the underlying sample fetch method. For example, all ACLs below are exact equivalent : acl short_form hdr_beg(host) www. acl alternate1 hdr_beg(host) -m beg www. acl alternate2 hdr_dom(host) -m beg www. acl alternate3 hdr(host) -m beg www. The table below summarizes the compatibility matrix between sample or converter types and the pattern types to fetch against. 
It indicates for each compatible combination the name of the matching method to be used, surrounded with angle brackets ">" and "<" when the method is the default one and will work by default without "-m". +-------------------------------------------------+ | Input sample type | +----------------------+---------+---------+---------+---------+---------+ | pattern type | boolean | integer | ip | string | binary | +----------------------+---------+---------+---------+---------+---------+ | none (presence only) | found | found | found | found | found | +----------------------+---------+---------+---------+---------+---------+ | none (boolean value) |> bool <| bool | | bool | | +----------------------+---------+---------+---------+---------+---------+ | integer (value) | int |> int <| int | int | | +----------------------+---------+---------+---------+---------+---------+ | integer (length) | len | len | len | len | len | +----------------------+---------+---------+---------+---------+---------+ | IP address | | |> ip <| ip | ip | +----------------------+---------+---------+---------+---------+---------+ | exact string | str | str | str |> str <| str | +----------------------+---------+---------+---------+---------+---------+ | prefix | beg | beg | beg | beg | beg | +----------------------+---------+---------+---------+---------+---------+ | suffix | end | end | end | end | end | +----------------------+---------+---------+---------+---------+---------+ | substring | sub | sub | sub | sub | sub | +----------------------+---------+---------+---------+---------+---------+ | subdir | dir | dir | dir | dir | dir | +----------------------+---------+---------+---------+---------+---------+ | domain | dom | dom | dom | dom | dom | +----------------------+---------+---------+---------+---------+---------+ | regex | reg | reg | reg | reg | reg | +----------------------+---------+---------+---------+---------+---------+ | hex block | | | | bin | bin | +----------------------+---------+---------+---------+---------+---------+ ``` #### 7.1.1. Matching booleans ``` In order to match a boolean, no value is needed and all values are ignored. Boolean matching is used by default for all fetch methods of type "boolean". When boolean matching is used, the fetched value is returned as-is, which means that a boolean "true" will always match and a boolean "false" will never match. Boolean matching may also be enforced using "-m bool" on fetch methods which return an integer value. Then, integer value 0 is converted to the boolean "false" and all other values are converted to "true". ``` #### 7.1.2. Matching integers ``` Integer matching applies by default to integer fetch methods. It can also be enforced on boolean fetches using "-m int". In this case, "false" is converted to the integer 0, and "true" is converted to the integer 1. Integer matching also supports integer ranges and operators. Note that integer matching only applies to positive values. A range is a value expressed with a lower and an upper bound separated with a colon, both of which may be omitted. For instance, "1024:65535" is a valid range to represent a range of unprivileged ports, and "1024:" would also work. "0:1023" is a valid representation of privileged ports, and ":1023" would also work. As a special case, some ACL functions support decimal numbers which are in fact two integers separated by a dot. This is used with some version checks for instance. All integer properties apply to those decimal numbers, including ranges and operators. 
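For instance, a minimal sketch (ACL names are arbitrary) matching ports against integer ranges:

```
# matches when the client's source port is in the unprivileged range
acl unpriv_src_port src_port 1024:65535
# open-ended range: matches any destination port above 1023
acl high_dst_port dst_port 1024:
```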
For an easier usage, comparison operators are also supported. Note that using operators with ranges does not make much sense and is strongly discouraged. Similarly, it does not make much sense to perform order comparisons with a set of values. Available operators for integer matching are : eq : true if the tested value equals at least one value ge : true if the tested value is greater than or equal to at least one value gt : true if the tested value is greater than at least one value le : true if the tested value is less than or equal to at least one value lt : true if the tested value is less than at least one value For instance, the following ACL matches any negative Content-Length header : acl negative-length req.hdr_val(content-length) lt 0 This one matches SSL versions between 3.0 and 3.1 (inclusive) : acl sslv3 req.ssl_ver 3:3.1 ``` #### 7.1.3. Matching strings ``` String matching applies to string or binary fetch methods, and exists in 6 different forms : - exact match (-m str) : the extracted string must exactly match the patterns; - substring match (-m sub) : the patterns are looked up inside the extracted string, and the ACL matches if any of them is found inside; - prefix match (-m beg) : the patterns are compared with the beginning of the extracted string, and the ACL matches if any of them matches. - suffix match (-m end) : the patterns are compared with the end of the extracted string, and the ACL matches if any of them matches. - subdir match (-m dir) : the patterns are looked up anywhere inside the extracted string, delimited with slashes ("/"), the beginning or the end of the string. The ACL matches if any of them matches. As such, the string "/images/png/logo/32x32.png", would match "/images", "/images/png", "images/png", "/png/logo", "logo/32x32.png" or "32x32.png" but not "png" nor "32x32". - domain match (-m dom) : the patterns are looked up anywhere inside the extracted string, delimited with dots ("."), colons (":"), slashes ("/"), question marks ("?"), the beginning or the end of the string. This is made to be used with URLs. Leading and trailing delimiters in the pattern are ignored. The ACL matches if any of them matches. As such, in the example string "http://www1.dc-eu.example.com:80/blah", the patterns "http", "www1", ".www1", "dc-eu", "example", "com", "80", "dc-eu.example", "blah", ":www1:", "dc-eu.example:80" would match, but not "eu" nor "dc". Using it to match domain suffixes for filtering or routing is generally not a good idea, as the routing could easily be fooled by prepending the matching prefix in front of another domain for example. String matching applies to verbatim strings as they are passed, with the exception of the backslash ("\") which makes it possible to escape some characters such as the space. If the "-i" flag is passed before the first string, then the matching will be performed ignoring the case. In order to match the string "-i", either set it second, or pass the "--" flag before the first string. Same applies of course to match the string "--". Do not use string matches for binary fetches which might contain null bytes (0x00), as the comparison stops at the occurrence of the first null byte. Instead, convert the binary fetch to a hex string with the hex converter first. ``` Example: ``` # matches if the string <tag> is present in the binary sample acl tag_found req.payload(0,0),hex -m sub 3C7461673E ``` #### 7.1.4. 
Matching regular expressions (regexes) ``` Just like with string matching, regex matching applies to verbatim strings as they are passed, with the exception of the backslash ("\") which makes it possible to escape some characters such as the space. If the "-i" flag is passed before the first regex, then the matching will be performed ignoring the case. In order to match the string "-i", either set it second, or pass the "--" flag before the first string. Same principle applies of course to match the string "--". ``` #### 7.1.5. Matching arbitrary data blocks ``` It is possible to match some extracted samples against a binary block which may not safely be represented as a string. For this, the patterns must be passed as a series of hexadecimal digits in an even number, when the match method is set to binary. Each sequence of two digits will represent a byte. The hexadecimal digits may be used upper or lower case. ``` Example : ``` # match "Hello\n" in the input stream (\x48 \x65 \x6c \x6c \x6f \x0a) acl hello req.payload(0,6) -m bin 48656c6c6f0a ``` #### 7.1.6. Matching IPv4 and IPv6 addresses ``` IPv4 addresses values can be specified either as plain addresses or with a netmask appended, in which case the IPv4 address matches whenever it is within the network. Plain addresses may also be replaced with a resolvable host name, but this practice is generally discouraged as it makes it more difficult to read and debug configurations. If hostnames are used, you should at least ensure that they are present in /etc/hosts so that the configuration does not depend on any random DNS match at the moment the configuration is parsed. The dotted IPv4 address notation is supported in both regular as well as the abbreviated form with all-0-octets omitted: +------------------+------------------+------------------+ | Example 1 | Example 2 | Example 3 | +------------------+------------------+------------------+ | 192.168.0.1 | 10.0.0.12 | 127.0.0.1 | | 192.168.1 | 10.12 | 127.1 | | 192.168.0.1/22 | 10.0.0.12/8 | 127.0.0.1/8 | | 192.168.1/22 | 10.12/8 | 127.1/8 | +------------------+------------------+------------------+ Notice that this is different from RFC 4632 CIDR address notation in which 192.168.42/24 would be equivalent to 192.168.42.0/24. IPv6 may be entered in their usual form, with or without a netmask appended. Only bit counts are accepted for IPv6 netmasks. In order to avoid any risk of trouble with randomly resolved IP addresses, host names are never allowed in IPv6 patterns. HAProxy is also able to match IPv4 addresses with IPv6 addresses in the following situations : - tested address is IPv4, pattern address is IPv4, the match applies in IPv4 using the supplied mask if any. - tested address is IPv6, pattern address is IPv6, the match applies in IPv6 using the supplied mask if any. - tested address is IPv6, pattern address is IPv4, the match applies in IPv4 using the pattern's mask if the IPv6 address matches with 2002:IPV4::, ::IPV4 or ::ffff:IPV4, otherwise it fails. - tested address is IPv4, pattern address is IPv6, the IPv4 address is first converted to IPv6 by prefixing ::ffff: in front of it, then the match is applied in IPv6 using the supplied IPv6 mask. ``` ### 7.2. Using ACLs to form conditions ``` Some actions are only performed upon a valid condition. A condition is a combination of ACLs with operators. 
3 operators are supported : - AND (implicit) - OR (explicit with the "[or](#or)" keyword or the "||" operator) - Negation with the exclamation mark ("!") A condition is formed as a disjunctive form: [!]acl1 [!]acl2 ... [!]acln { or [!]acl1 [!]acl2 ... [!]acln } ... Such conditions are generally used after an "if" or "unless" statement, indicating when the condition will trigger the action. For instance, to block HTTP requests to the "*" URL with methods other than "OPTIONS", as well as POST requests without content-length, and GET or HEAD requests with a content-length greater than 0, and finally every request which is not either GET/HEAD/POST/OPTIONS ! acl missing_cl req.hdr_cnt(Content-length) eq 0 http-request deny if HTTP_URL_STAR !METH_OPTIONS || METH_POST missing_cl http-request deny if METH_GET HTTP_CONTENT http-request deny unless METH_GET or METH_POST or METH_OPTIONS To select a different backend for requests to static contents on the "www" site and to every request on the "img", "video", "download" and "ftp" hosts : acl url_static path_beg /static /images /img /css acl url_static path_end .gif .png .jpg .css .js acl host_www hdr_beg(host) -i www acl host_static hdr_beg(host) -i img. video. download. ftp. # now use backend "static" for all static-only hosts, and for static URLs # of host "www". Use backend "www" for the rest. use_backend static if host_static or host_www url_static use_backend www if host_www It is also possible to form rules using "anonymous ACLs". Those are unnamed ACL expressions that are built on the fly without needing to be declared. They must be enclosed between braces, with a space before and after each brace (because the braces must be seen as independent words). Example : The following rule : acl missing_cl req.hdr_cnt(Content-length) eq 0 http-request deny if METH_POST missing_cl Can also be written that way : http-request deny if METH_POST { req.hdr_cnt(Content-length) eq 0 } It is generally not recommended to use this construct because it's a lot easier to leave errors in the configuration when written that way. However, for very simple rules matching only one source IP address for instance, it can make more sense to use them than to declare ACLs with random names. Another example of good use is the following : With named ACLs : acl site_dead nbsrv(dynamic) lt 2 acl site_dead nbsrv(static) lt 2 monitor fail if site_dead With anonymous ACLs : monitor fail if { nbsrv(dynamic) lt 2 } || { nbsrv(static) lt 2 } See [section 4.2](#4.2) for detailed help on the "[http-request deny](#http-request%20deny)" and "[use\_backend](#use_backend)" keywords. ``` ### 7.3. Fetching samples ``` Historically, sample fetch methods were only used to retrieve data to match against patterns using ACLs. With the arrival of stick-tables, a new class of sample fetch methods was created, most often sharing the same syntax as their ACL counterpart. These sample fetch methods are also known as "fetches". As of now, ACLs and fetches have converged. All ACL fetch methods have been made available as fetch methods, and ACLs may use any sample fetch method as well. This section details all available sample fetch methods and their output type. Some sample fetch methods have deprecated aliases that are used to maintain compatibility with existing configurations. They are then explicitly marked as deprecated and should not be used in new setups. The ACL derivatives are also indicated when available, with their respective matching methods. 
These ones all have a well defined default pattern matching method, so it is never necessary (though allowed) to pass the "-m" option to indicate how the sample will be matched using ACLs. As indicated in the sample type versus matching compatibility matrix above, when using a generic sample fetch method in an ACL, the "-m" option is mandatory unless the sample type is one of boolean, integer, IPv4 or IPv6. When the same keyword exists as an ACL keyword and as a standard fetch method, the ACL engine will automatically pick the ACL-only one by default. Some of these keywords support one or multiple mandatory arguments, and one or multiple optional arguments. These arguments are strongly typed and are checked when the configuration is parsed so that there is no risk of running with an incorrect argument (e.g. an unresolved backend name). Fetch function arguments are passed between parenthesis and are delimited by commas. When an argument is optional, it will be indicated below between square brackets ('[ ]'). When all arguments are optional, the parenthesis may be omitted. Thus, the syntax of a standard sample fetch method is one of the following : - [name](#name) - name(arg1) - name(arg1,arg2) ``` #### 7.3.1. Converters ``` Sample fetch methods may be combined with transformations to be applied on top of the fetched sample (also called "converters"). These combinations form what is called "sample expressions" and the result is a "sample". Initially this was only supported by "[stick on](#stick%20on)" and "[stick store-request](#stick%20store-request)" directives but this has now be extended to all places where samples may be used (ACLs, log-format, unique-id-format, add-header, ...). These transformations are enumerated as a series of specific keywords after the sample fetch method. These keywords may equally be appended immediately after the fetch keyword's argument, delimited by a comma. These keywords can also support some arguments (e.g. a netmask) which must be passed in parenthesis. A certain category of converters are bitwise and arithmetic operators which support performing basic operations on integers. Some bitwise operations are supported (and, or, xor, cpl) and some arithmetic operations are supported (add, sub, mul, div, mod, neg). Some comparators are provided (odd, even, not, bool) which make it possible to report a match without having to write an ACL. The currently available list of transformation keywords include : ``` **51d.single**(<prop>[,<prop>\*]) ``` Returns values for the properties requested as a string, where values are separated by the delimiter specified with "[51degrees-property-separator](#51degrees-property-separator)". The device is identified using the User-Agent header passed to the converter. The function can be passed up to five property names, and if a property name can't be found, the value "NoData" is returned. ``` Example : ``` # Here the header "X-51D-DeviceTypeMobileTablet" is added to the request, # containing values for the three properties requested by using the # User-Agent passed to the converter. frontend http-in bind *:8081 default_backend servers http-request set-header X-51D-DeviceTypeMobileTablet \ %[req.fhdr(User-Agent),51d.single(DeviceType,IsMobile,IsTablet)] ``` **add**(<value>) ``` Adds <value> to the input value of type signed integer, and returns the result as a signed integer. <value> can be a numeric value or a variable name. The name of the variable starts with an indication about its scope. 
The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response) "req" : the variable is shared only during request processing "res" : the variable is shared only during response processing This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` **add\_item**(<delim>,[<var>][,<suff>]]) ``` Concatenates a minimum of 2 and up to 3 fields after the current sample which is then turned into a string. The first one, <delim>, is a constant string, that will be appended immediately after the existing sample if an existing sample is not empty and either the <var> or the <suff> is not empty. The second one, <var>, is a variable name. The variable will be looked up, its contents converted to a string, and it will be appended immediately after the <delim> part. If the variable is not found, nothing is appended. It is optional and may optionally be followed by a constant string <suff>, however if <var> is omitted, then <suff> is mandatory. This converter is similar to the concat converter and can be used to build new variables made of a succession of other variables but the main difference is that it does the checks if adding a delimiter makes sense as wouldn't be the case if e.g. the current sample is empty. That situation would require 2 separate rules using concat converter where the first rule would have to check if the current sample string is empty before adding a delimiter. If commas or closing parenthesis are needed as delimiters, they must be protected by quotes or backslashes, themselves protected so that they are not stripped by the first level parser (please see [section 2.2](#2.2) for quoting and escaping). See examples below. ``` Example: ``` http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score1,"(site1)") if src,in_table(site1)' http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score2,"(site2)") if src,in_table(site2)' http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score3,"(site3)") if src,in_table(site3)' http-request set-header x-tagged %[var(req.tagged)] http-request set-var(req.tagged) 'var(req.tagged),add_item(",",req.score1),add_item(",",req.score2)' http-request set-var(req.tagged) 'var(req.tagged),add_item(",",,(site1))' if src,in_table(site1) ``` **aes\_gcm\_dec**(<bits>,<nonce>,<key>,<aead\_tag>) ``` Decrypts the raw byte input using the AES128-GCM, AES192-GCM or AES256-GCM algorithm, depending on the <bits> parameter. All other parameters need to be base64 encoded and the returned result is in raw byte format. If the <aead_tag> validation fails, the converter doesn't return any data. The <nonce>, <key> and <aead_tag> can either be strings or variables. This converter requires at least OpenSSL 1.0.1. ``` Example: ``` http-response set-header X-Decrypted-Text %[var(txn.enc),\ aes_gcm_dec(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==,txn.aead_tag)] ``` **and**(<value>) ``` Performs a bitwise "AND" between <value> and the input value of type signed integer, and returns the result as an signed integer. <value> can be a numeric value or a variable name. The name of the variable starts with an indication about its scope. 
The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response) "req" : the variable is shared only during request processing "res" : the variable is shared only during response processing This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` **b64dec** ``` Converts (decodes) a base64 encoded input string to its binary representation. It performs the inverse operation of base64(). For base64url("URL and Filename Safe Alphabet" (RFC 4648)) variant see "[ub64dec](#ub64dec)". ``` **base64** ``` Converts a binary input sample to a base64 string. It is used to log or transfer binary content in a way that can be reliably transferred (e.g. an SSL ID can be copied in a header). For base64url("URL and Filename Safe Alphabet" (RFC 4648)) variant see "[ub64enc](#ub64enc)". ``` **be2dec**(<separator>,<chunk\_size>,[<truncate>]) ``` Converts big-endian binary input sample to a string containing an unsigned integer number per <chunk_size> input bytes. <separator> is put every <chunk_size> binary input bytes if specified. <truncate> flag indicates whatever binary input is truncated at <chunk_size> boundaries. <chunk_size> maximum value is limited by the size of long long int (8 bytes). ``` Example: ``` bin(01020304050607),be2dec(:,2) # 258:772:1286:7 bin(01020304050607),be2dec(-,2,1) # 258-772-1286 bin(01020304050607),be2dec(,2,1) # 2587721286 bin(7f000001),be2dec(.,1) # 127.0.0.1 ``` **be2hex**([<separator>],[<chunk\_size>],[<truncate>]) ``` Converts big-endian binary input sample to a hex string containing two hex digits per input byte. It is used to log or transfer hex dumps of some binary input data in a way that can be reliably transferred (e.g. an SSL ID can be copied in a header). <separator> is put every <chunk_size> binary input bytes if specified. <truncate> flag indicates whatever binary input is truncated at <chunk_size> boundaries. ``` Example: ``` bin(01020304050607),be2hex # 01020304050607 bin(01020304050607),be2hex(:,2) # 0102:0304:0506:07 bin(01020304050607),be2hex(--,2,1) # 0102--0304--0506 bin(0102030405060708),be2hex(,3,1) # 010203040506 ``` **bool** ``` Returns a boolean TRUE if the input value of type signed integer is non-null, otherwise returns FALSE. Used in conjunction with and(), it can be used to report true/false for bit testing on input values (e.g. verify the presence of a flag). ``` **bytes**(<offset>[,<length>]) ``` Extracts some bytes from an input binary sample. The result is a binary sample starting at an offset (in bytes) of the original sample and optionally truncated at the given length. ``` **concat**([<start>],[<var>],[<end>]) ``` Concatenates up to 3 fields after the current sample which is then turned to a string. The first one, <start>, is a constant string, that will be appended immediately after the existing sample. It may be omitted if not used. The second one, <var>, is a variable name. The variable will be looked up, its contents converted to a string, and it will be appended immediately after the <first> part. If the variable is not found, nothing is appended. It may be omitted as well. The third field, <end> is a constant string that will be appended after the variable. It may also be omitted. Together, these elements allow to concatenate variables with delimiters to an existing set of variables. 
This can be used to build new variables made of a succession of other variables, such as colon-delimited values. If commas or closing parenthesis are needed as delimiters, they must be protected by quotes or backslashes, themselves protected so that they are not stripped by the first level parser. This is often used to build composite variables from other ones, but sometimes using a format string with multiple fields may be more convenient. See examples below. ``` Example: ``` tcp-request session set-var(sess.src) src tcp-request session set-var(sess.dn) ssl_c_s_dn tcp-request session set-var(txn.sig) str(),concat(<ip=,sess.ip,>),concat(<dn=,sess.dn,>) tcp-request session set-var(txn.ipport) "str(),concat('addr=(',sess.ip),concat(',',sess.port,')')" tcp-request session set-var-fmt(txn.ipport) "addr=(%[sess.ip],%[sess.port])" ## does the same http-request set-header x-hap-sig %[var(txn.sig)] ``` **cpl** ``` Takes the input value of type signed integer, applies a ones-complement (flips all bits) and returns the result as an signed integer. ``` **crc32**([<avalanche>]) ``` Hashes a binary input sample into an unsigned 32-bit quantity using the CRC32 hash function. Optionally, it is possible to apply a full avalanche hash function to the output if the optional <avalanche> argument equals 1. This converter uses the same functions as used by the various hash-based load balancing algorithms, so it will provide exactly the same results. It is provided for compatibility with other software which want a CRC32 to be computed on some input keys, so it follows the most common implementation as found in Ethernet, Gzip, PNG, etc... It is slower than the other algorithms but may provide a better or at least less predictable distribution. It must not be used for security purposes as a 32-bit hash is trivial to break. See also "[djb2](#djb2)", "[sdbm](#sdbm)", "[wt6](#wt6)", "[crc32c](#crc32c)" and the "[hash-type](#hash-type)" directive. ``` **crc32c**([<avalanche>]) ``` Hashes a binary input sample into an unsigned 32-bit quantity using the CRC32C hash function. Optionally, it is possible to apply a full avalanche hash function to the output if the optional <avalanche> argument equals 1. This converter uses the same functions as described in RFC4960, Appendix B [8]. It is provided for compatibility with other software which want a CRC32C to be computed on some input keys. It is slower than the other algorithms and it must not be used for security purposes as a 32-bit hash is trivial to break. See also "[djb2](#djb2)", "[sdbm](#sdbm)", "[wt6](#wt6)", "[crc32](#crc32)" and the "[hash-type](#hash-type)" directive. ``` **cut\_crlf** ``` Cuts the string representation of the input sample on the first carriage return ('\r') or newline ('\n') character found. Only the string length is updated. ``` **da-csv-conv**(<prop>[,<prop>\*]) ``` Asks the DeviceAtlas converter to identify the User Agent string passed on input, and to emit a string made of the concatenation of the properties enumerated in argument, delimited by the separator defined by the global keyword "deviceatlas-property-separator", or by default the pipe character ('|'). There's a limit of 12 different properties imposed by the HAProxy configuration language. 
``` Example: ``` frontend www bind *:8881 default_backend servers http-request set-header X-DeviceAtlas-Data %[req.fhdr(User-Agent),da-csv(primaryHardwareType,osName,osVersion,browserName,browserVersion,browserRenderingEngine)] ``` **debug**([<prefix][,<destination>]) ``` This converter is used as debug tool. It takes a capture of the input sample and sends it to event sink <destination>, which may designate a ring buffer such as "buf0", as well as "stdout", or "stderr". Available sinks may be checked at run time by issuing "show events" on the CLI. When not specified, the output will be "buf0", which may be consulted via the CLI's "show events" command. An optional prefix <prefix> may be passed to help distinguish outputs from multiple expressions. It will then appear before the colon in the output message. The input sample is passed as-is on the output, so that it is safe to insert the debug converter anywhere in a chain, even with non- printable sample types. ``` Example: ``` tcp-request connection track-sc0 src,debug(track-sc) ``` **digest**(<algorithm>) ``` Converts a binary input sample to a message digest. The result is a binary sample. The <algorithm> must be an OpenSSL message digest name (e.g. sha256). Please note that this converter is only available when HAProxy has been compiled with USE_OPENSSL. ``` **div**(<value>) ``` Divides the input value of type signed integer by <value>, and returns the result as an signed integer. If <value> is null, the largest unsigned integer is returned (typically 2^63-1). <value> can be a numeric value or a variable name. The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response) "req" : the variable is shared only during request processing "res" : the variable is shared only during response processing This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` **djb2**([<avalanche>]) ``` Hashes a binary input sample into an unsigned 32-bit quantity using the DJB2 hash function. Optionally, it is possible to apply a full avalanche hash function to the output if the optional <avalanche> argument equals 1. This converter uses the same functions as used by the various hash-based load balancing algorithms, so it will provide exactly the same results. It is mostly intended for debugging, but can be used as a stick-table entry to collect rough statistics. It must not be used for security purposes as a 32-bit hash is trivial to break. See also "[crc32](#crc32)", "[sdbm](#sdbm)", "[wt6](#wt6)", "[crc32c](#crc32c)", and the "[hash-type](#hash-type)" directive. ``` **even** ``` Returns a boolean TRUE if the input value of type signed integer is even otherwise returns FALSE. It is functionally equivalent to "not,and(1),bool". ``` **field**(<index>,<delimiters>[,<count>]) ``` Extracts the substring at the given index counting from the beginning (positive index) or from the end (negative index) considering given delimiters from an input string. Indexes start at 1 or -1 and delimiters are a string formatted list of chars. Optionally you can specify <count> of fields to extract (default: 1). Value of 0 indicates extraction of all remaining fields. 
```
Example :
```
str(f1_f2_f3__f5),field(5,_)    # f5
str(f1_f2_f3__f5),field(2,_,0)  # f2_f3__f5
str(f1_f2_f3__f5),field(2,_,2)  # f2_f3
str(f1_f2_f3__f5),field(-2,_,3) # f2_f3_
str(f1_f2_f3__f5),field(-3,_,0) # f1_f2_f3
```
**fix\_is\_valid**
```
Parses a binary payload and performs sanity checks regarding FIX (Financial Information eXchange):
 - checks that all tag IDs and values are not empty and that the tag IDs are numeric
 - checks that the BeginString tag is the first tag with a valid FIX version
 - checks that the BodyLength tag is the second one with the right body length
 - checks that the MsgType tag is the third tag
 - checks that the last tag in the message is the CheckSum tag with a valid checksum
Due to current HAProxy design, only the first message sent by the client and the server can be parsed. This converter returns a boolean, true if the payload contains a valid FIX message, false if not. See also the fix_tag_value converter.
```
Example:
```
tcp-request inspect-delay 10s
tcp-request content reject unless { req.payload(0,0),fix_is_valid }
```
**fix\_tag\_value**(<tag>)
```
Parses a FIX (Financial Information eXchange) message and extracts the value from the tag <tag>. <tag> can be a string or an integer pointing to the desired tag. Any integer value is accepted, but only the following strings are translated into their integer equivalent: BeginString, BodyLength, MsgType, SenderCompID, TargetCompID, CheckSum. More tag names can be easily added.
Due to current HAProxy design, only the first message sent by the client and the server can be parsed. No message validation is performed by this converter. It is highly recommended to validate the message first using the fix_is_valid converter. See also the fix_is_valid converter.
```
Example:
```
tcp-request inspect-delay 10s
tcp-request content reject unless { req.payload(0,0),fix_is_valid }
# MsgType tag ID is 35, so both lines below will return the same content
tcp-request content set-var(txn.foo) req.payload(0,0),fix_tag_value(35)
tcp-request content set-var(txn.bar) req.payload(0,0),fix_tag_value(MsgType)
```
**hex**
```
Converts a binary input sample to a hex string containing two hex digits per input byte. It is used to log or transfer hex dumps of some binary input data in a way that can be reliably transferred (e.g. an SSL ID can be copied in a header).
```
**hex2i**
```
Converts a hex string containing two hex digits per input byte to an integer. If the input value cannot be converted, then zero is returned.
```
**htonl**
```
Converts the input integer value to its 32-bit binary representation in the network byte order. Because sample fetches carry signed 64-bit integers, when this converter is used, the input integer value is first cast to an unsigned 32-bit integer.
```
**hmac**(<algorithm>,<key>)
```
Converts a binary input sample to a message authentication code with the given key. The result is a binary sample. The <algorithm> must be one of the registered OpenSSL message digest names (e.g. sha256). The <key> parameter must be base64 encoded and can either be a string or a variable.
Please note that this converter is only available when HAProxy has been compiled with USE_OPENSSL.
```
**host\_only**
```
Converts a string which contains a Host header value and removes its port. The input must respect the format of the host header value (rfc9110#section-7.2). It supports inputs such as: hostname, hostname:80, 127.0.0.1, 127.0.0.1:80, [::1], [::1]:80. This converter also converts the string to lowercase.
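For example, one way to use it (the hostname and backend name below are placeholders) could be:

    # match the Host header whether or not it carries an explicit port or mixed case
    acl is_api req.hdr(host),host_only -m str api.example.com
    use_backend be_api if is_api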
``` **See also:** "[port\_only](#port_only)" converter which will return the port. **http\_date**([<offset],[<unit>]) ``` Converts an integer supposed to contain a date since epoch to a string representing this date in a format suitable for use in HTTP header fields. If an offset value is specified, then it is added to the date before the conversion is operated. This is particularly useful to emit Date header fields, Expires values in responses when combined with a positive offset, or Last-Modified values when the offset is negative. If a unit value is specified, then consider the timestamp as either "s" for seconds (default behavior), "ms" for milliseconds, or "us" for microseconds since epoch. Offset is assumed to have the same unit as input timestamp. ``` **iif**(<true>,<false>) ``` Returns the <true> string if the input value is true. Returns the <false> string otherwise. ``` Example: ``` http-request set-header x-forwarded-proto %[ssl_fc,iif(https,http)] ``` **in\_table**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, a boolean false is returned. Otherwise a boolean true is returned. This can be used to verify the presence of a certain key in a table tracking some elements (e.g. whether or not a source IP address or an Authorization header was already seen). ``` **ipmask**(<mask4>,[<mask6>]) ``` Apply a mask to an IP address, and use the result for lookups and storage. This can be used to make all hosts within a certain mask to share the same table entries and as such use the same server. The mask4 can be passed in dotted form (e.g. 255.255.255.0) or in CIDR form (e.g. 24). The mask6 can be passed in quadruplet form (e.g. ffff:ffff::) or in CIDR form (e.g. 64). If no mask6 is given IPv6 addresses will fail to convert for backwards compatibility reasons. ``` **json**([<input-code>]) ``` Escapes the input string and produces an ASCII output string ready to use as a JSON string. The converter tries to decode the input string according to the <input-code> parameter. It can be "ascii", "utf8", "utf8s", "utf8p" or "utf8ps". The "ascii" decoder never fails. The "utf8" decoder detects 3 types of errors: - bad UTF-8 sequence (lone continuation byte, bad number of continuation bytes, ...) - invalid range (the decoded value is within a UTF-8 prohibited range), - code overlong (the value is encoded with more bytes than necessary). The UTF-8 JSON encoding can produce a "too long value" error when the UTF-8 character is greater than 0xffff because the JSON string escape specification only authorizes 4 hex digits for the value encoding. The UTF-8 decoder exists in 4 variants designated by a combination of two suffix letters : "p" for "permissive" and "s" for "silently ignore". The behaviors of the decoders are : - "ascii" : never fails; - "utf8" : fails on any detected errors; - "utf8s" : never fails, but removes characters corresponding to errors; - "utf8p" : accepts and fixes the overlong errors, but fails on any other error; - "utf8ps" : never fails, accepts and fixes the overlong errors, but removes characters corresponding to the other errors. This converter is particularly useful for building properly escaped JSON for logging to servers which consume JSON-formatted traffic logs. 
``` Example: ``` capture request header Host len 15 capture request header user-agent len 150 log-format '{"ip":"%[src]","user-agent":"%[capture.req.hdr(1),json(utf8s)]"}' ``` ``` Input request from client 127.0.0.1: GET / HTTP/1.0 User-Agent: Very "Ugly" UA 1/2 Output log: {"ip":"127.0.0.1","user-agent":"Very \"Ugly\" UA 1\/2"} ``` **json\_query**(<json\_path>,[<output\_type>]) ``` The json_query converter supports the JSON types string, boolean and number. Floating point numbers will be returned as a string. By specifying the output_type 'int' the value will be converted to an Integer. If conversion is not possible the json_query converter fails. <json_path> must be a valid JSON Path string as defined in https://datatracker.ietf.org/doc/draft-ietf-jsonpath-base/ ``` Example: ``` # get a integer value from the request body # "{"integer":4}" => 5 http-request set-var(txn.pay_int) req.body,json_query('$.integer','int'),add(1) # get a key with '.' in the name # {"my.key":"myvalue"} => myvalue http-request set-var(txn.pay_mykey) req.body,json_query('$.my\\.key') # {"boolean-false":false} => 0 http-request set-var(txn.pay_boolean_false) req.body,json_query('$.boolean-false') # get the value of the key 'iss' from a JWT Bearer token http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec,json_query('$.iss') ``` **jwt\_header\_query**([<json\_path>],[<output\_type>]) ``` When given a JSON Web Token (JWT) in input, either returns the decoded header part of the token (the first base64-url encoded part of the JWT) if no parameter is given, or performs a json_query on the decoded header part of the token. See "[json\_query](#json_query)" converter for details about the accepted json_path and output_type parameters. Please note that this converter is only available when HAProxy has been compiled with USE_OPENSSL. ``` **jwt\_payload\_query**([<json\_path>],[<output\_type>]) ``` When given a JSON Web Token (JWT) in input, either returns the decoded payload part of the token (the second base64-url encoded part of the JWT) if no parameter is given, or performs a json_query on the decoded payload part of the token. See "[json\_query](#json_query)" converter for details about the accepted json_path and output_type parameters. Please note that this converter is only available when HAProxy has been compiled with USE_OPENSSL. ``` **jwt\_verify**(<alg>,<key>) ``` Performs a signature verification for the JSON Web Token (JWT) given in input by using the <alg> algorithm and the <key> parameter, which should either hold a secret or a path to a public certificate. Returns 1 in case of verification success, 0 in case of verification error and a strictly negative value for any other error. Because of all those non-null error return values, the result of this converter should never be converted to a boolean. See below for a full list of the possible return values. For now, only JWS tokens using the Compact Serialization format can be processed (three dot-separated base64-url encoded strings). Among the accepted algorithms for a JWS (see [section 3.1](#3.1) of RFC7518), the PSXXX ones are not managed yet. If the used algorithm is of the HMAC family, <key> should be the secret used in the HMAC signature calculation. Otherwise, <key> should be the path to the public certificate that can be used to validate the token's signature. 
All the certificates that might be used to verify JWTs must be known during init in order to be added into a dedicated certificate cache so that no disk access is required during runtime. For this reason, any used certificate must be mentioned explicitly at least once in a jwt_verify call. Passing an intermediate variable as second parameter is then not advised. This converter only verifies the signature of the token and does not perform a full JWT validation as specified in [section 7.2](#7.2) of RFC7519. We do not ensure that the header and payload contents are fully valid JSON's once decoded for instance, and no checks are performed regarding their respective contents. The possible return values are the following : +----+----------------------------------------------------------------------+ | ID | message | +----+----------------------------------------------------------------------+ | 0 | "Verification failure" | | 1 | "Verification success" | | -1 | "Unknown algorithm (not mentioned in RFC7518)" | | -2 | "Unmanaged algorithm (PSXXX algorithm family)" | | -3 | "Invalid token" | | -4 | "Out of memory" | | -5 | "Unknown certificate" | +----+----------------------------------------------------------------------+ Please note that this converter is only available when HAProxy has been compiled with USE_OPENSSL. ``` Example: ``` # Get a JWT from the authorization header, extract the "alg" field of its # JOSE header and use a public certificate to verify a signature http-request set-var(txn.bearer) http_auth_bearer http-request set-var(txn.jwt_alg) var(txn.bearer),jwt_header_query('$.alg') http-request deny unless { var(txn.jwt_alg) "RS256" } http-request deny unless { var(txn.bearer),jwt_verify(txn.jwt_alg,"/path/to/crt.pem") 1 } ``` **language**(<value>[,<default>]) ``` Returns the value with the highest q-factor from a list as extracted from the "accept-language" header using "[req.fhdr](#req.fhdr)". Values with no q-factor have a q-factor of 1. Values with a q-factor of 0 are dropped. Only values which belong to the list of semi-colon delimited <values> will be considered. The argument <value> syntax is "lang[;lang[;lang[;...]]]". If no value matches the given list and a default value is provided, it is returned. Note that language names may have a variant after a dash ('-'). If this variant is present in the list, it will be matched, but if it is not, only the base language is checked. The match is case-sensitive, and the output string is always one of those provided in arguments. The ordering of arguments is meaningless, only the ordering of the values in the request counts, as the first value among multiple sharing the same q-factor is used. ``` Example : ``` # this configuration switches to the backend matching a # given language based on the request : acl es req.fhdr(accept-language),language(es;fr;en) -m str es acl fr req.fhdr(accept-language),language(es;fr;en) -m str fr acl en req.fhdr(accept-language),language(es;fr;en) -m str en use_backend spanish if es use_backend french if fr use_backend english if en default_backend choose_your_language ``` **length** ``` Get the length of the string. This can only be placed after a string sample fetch function or after a transformation keyword returning a string type. The result is of type integer. ``` **lower** ``` Convert a string sample to lower case. This can only be placed after a string sample fetch function or after a transformation keyword returning a string type. The result is of type string. 
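For example, "lower" can normalize a header before a case-sensitive map lookup (the map file path and default backend below are placeholders):

    # normalize the Host header to lower case before the lookup
    use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/domains.map,be_default)]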
``` **ltime**(<format>[,<offset>]) ``` Converts an integer supposed to contain a date since epoch to a string representing this date in local time using a format defined by the <format> string using strftime(3). The purpose is to allow any date format to be used in logs. An optional <offset> in seconds may be applied to the input date (positive or negative). See the strftime() man page for the format supported by your operating system. See also the utime converter. ``` Example : ``` # Emit two colons, one with the local time and another with ip:port # e.g. 20140710162350 127.0.0.1:57325 log-format %[date,ltime(%Y%m%d%H%M%S)]\ %ci:%cp ``` **ltrim**(<chars>) ``` Skips any characters from <chars> from the beginning of the string representation of the input sample. ``` **map**(<map\_file>[,<default\_value>]) ``` map_<match_type>(<map_file>[,<default_value>]) map_<match_type>_<output_type>(<map_file>[,<default_value>]) Search the input value from <map_file> using the <match_type> matching method, and return the associated value converted to the type <output_type>. If the input value cannot be found in the <map_file>, the converter returns the <default_value>. If the <default_value> is not set, the converter fails and acts as if no input value could be fetched. If the <match_type> is not set, it defaults to "[str](#str)". Likewise, if the <output_type> is not set, it defaults to "[str](#str)". For convenience, the "[map](#map)" keyword is an alias for "map_str" and maps a string to another string. It is important to avoid overlapping between the keys : IP addresses and strings are stored in trees, so the first of the finest match will be used. Other keys are stored in lists, so the first matching occurrence will be used. The following array contains the list of all map functions available sorted by input type, match type and output type. ``` | input type | match method | output type str | output type int | output type ip | | --- | --- | --- | --- | --- | | str | str | map\_str | map\_str\_int | map\_str\_ip | | str | beg | map\_beg | map\_beg\_int | map\_end\_ip | | str | sub | map\_sub | map\_sub\_int | map\_sub\_ip | | str | dir | map\_dir | map\_dir\_int | map\_dir\_ip | | str | dom | map\_dom | map\_dom\_int | map\_dom\_ip | | str | end | map\_end | map\_end\_int | map\_end\_ip | | str | reg | map\_reg | map\_reg\_int | map\_reg\_ip | | str | reg | map\_regm | map\_reg\_int | map\_reg\_ip | | int | int | map\_int | map\_int\_int | map\_int\_ip | | ip | ip | map\_ip | map\_ip\_int | map\_ip\_ip | ``` The special map called "map_regm" expect matching zone in the regular expression and modify the output replacing back reference (like "\1") by the corresponding match text. The file contains one key + value per line. Lines which start with '#' are ignored, just like empty lines. Leading tabs and spaces are stripped. The key is then the first "[word](#word)" (series of non-space/tabs characters), and the value is what follows this series of space/tab till the end of the line excluding trailing spaces/tabs. ``` Example : ``` # this is a comment and is ignored 2.22.246.0/23 United Kingdom \n <-><-----------><--><------------><----> | | | | `- trailing spaces ignored | | | `---------- value | | `-------------------- middle spaces ignored | `---------------------------- key `------------------------------------ leading spaces ignored ``` **mod**(<value>) ``` Divides the input value of type signed integer by <value>, and returns the remainder as an signed integer. 
If <value> is null, then zero is returned. <value> can be a numeric value or a variable name. The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response) "req" : the variable is shared only during request processing "res" : the variable is shared only during response processing This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` **mqtt\_field\_value**(<packettype>,<fieldname\_or\_property\_ID>) ``` Returns value of <fieldname> found in input MQTT payload of type <packettype>. <packettype> can be either a string (case insensitive matching) or a numeric value corresponding to the type of packet we're supposed to extract data from. Supported string and integers can be found here: https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718021 https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901022 <fieldname> depends on <packettype> and can be any of the following below. (note that <fieldname> matching is case insensitive). <property id> can only be found in MQTT v5.0 streams. check this table: https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901029 - CONNECT (or 1): flags, protocol_name, protocol_version, client_identifier, will_topic, will_payload, username, password, keepalive OR any property ID as a numeric value (for MQTT v5.0 packets only): 17: Session Expiry Interval 33: Receive Maximum 39: Maximum Packet Size 34: Topic Alias Maximum 25: Request Response Information 23: Request Problem Information 21: Authentication Method 22: Authentication Data 18: Will Delay Interval 1: Payload Format Indicator 2: Message Expiry Interval 3: Content Type 8: Response Topic 9: Correlation Data Not supported yet: 38: User Property - CONNACK (or 2): flags, protocol_version, reason_code OR any property ID as a numeric value (for MQTT v5.0 packets only): 17: Session Expiry Interval 33: Receive Maximum 36: Maximum QoS 37: Retain Available 39: Maximum Packet Size 18: Assigned Client Identifier 34: Topic Alias Maximum 31: Reason String 40; Wildcard Subscription Available 41: Subscription Identifiers Available 42: Shared Subscription Available 19: Server Keep Alive 26: Response Information 28: Server Reference 21: Authentication Method 22: Authentication Data Not supported yet: 38: User Property Due to current HAProxy design, only the first message sent by the client and the server can be parsed. Thus this converter can extract data only from CONNECT and CONNACK packet types. CONNECT is the first message sent by the client and CONNACK is the first response sent by the server. ``` Example: ``` acl data_in_buffer req.len ge 4 tcp-request content set-var(txn.username) \ req.payload(0,0),mqtt_field_value(connect,protocol_name) \ if data_in_buffer # do the same as above tcp-request content set-var(txn.username) \ req.payload(0,0),mqtt_field_value(1,protocol_name) \ if data_in_buffer ``` **mqtt\_is\_valid** ``` Checks that the binary input is a valid MQTT packet. It returns a boolean. Due to current HAProxy design, only the first message sent by the client and the server can be parsed. Thus this converter can extract data only from CONNECT and CONNACK packet types. 
CONNECT is the first message sent by the client and CONNACK is the first response sent by the server. Only MQTT 3.1, 3.1.1 and 5.0 are supported. ``` Example: ``` acl data_in_buffer req.len ge 4 tcp-request content reject unless { req.payload(0,0),mqtt_is_valid } ``` **mul**(<value>) ``` Multiplies the input value of type signed integer by <value>, and returns the product as an signed integer. In case of overflow, the largest possible value for the sign is returned so that the operation doesn't wrap around. <value> can be a numeric value or a variable name. The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response) "req" : the variable is shared only during request processing "res" : the variable is shared only during response processing This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` **nbsrv** ``` Takes an input value of type string, interprets it as a backend name and returns the number of usable servers in that backend. Can be used in places where we want to look up a backend from a dynamic name, like a result of a map lookup. ``` **neg** ``` Takes the input value of type signed integer, computes the opposite value, and returns the remainder as an signed integer. 0 is identity. This operator is provided for reversed subtracts : in order to subtract the input from a constant, simply perform a "neg,add(value)". ``` **not** ``` Returns a boolean FALSE if the input value of type signed integer is non-null, otherwise returns TRUE. Used in conjunction with and(), it can be used to report true/false for bit testing on input values (e.g. verify the absence of a flag). ``` **odd** ``` Returns a boolean TRUE if the input value of type signed integer is odd otherwise returns FALSE. It is functionally equivalent to "and(1),bool". ``` **or**(<value>) ``` Performs a bitwise "OR" between <value> and the input value of type signed integer, and returns the result as an signed integer. <value> can be a numeric value or a variable name. The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response) "req" : the variable is shared only during request processing "res" : the variable is shared only during response processing This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` **port\_only** ``` Converts a string which contains a Host header value into an integer by returning its port. The input must respect the format of the host header value (rfc9110#section-7.2). It will support that kind of input: hostname, hostname:80, 127.0.0.1, 127.0.0.1:80, [::1], [::1]:80. If no port were provided in the input, it will return 0. ``` **See also:** "[host\_only](#host_only)" converter which will return the host. 
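As an illustration, "host\_only" and "port\_only" can be combined to split a Host header value into its two parts; the header names below are placeholders:

```
# expose the host and port parts of the Host header to the backend
http-request set-header X-Host-Name %[req.hdr(host),host_only]
http-request set-header X-Host-Port %[req.hdr(host),port_only]
```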
**protobuf**(<field\_number>,[<field\_type>]) ``` This extracts the protocol buffers message field in raw mode of an input binary sample representation of a protocol buffer message with <field_number> as field number (dotted notation) if <field_type> is not present, or as an integer sample if this field is present (see also "[ungrpc](#ungrpc)" below). The list of the authorized types is the following one: "int32", "int64", "uint32", "uint64", "sint32", "sint64", "bool", "enum" for the "varint" wire type 0 "fixed64", "sfixed64", "double" for the 64bit wire type 1, "fixed32", "sfixed32", "float" for the wire type 5. Note that "string" is considered as a length-delimited type, so it does not require any <field_type> argument to be extracted. More information may be found here about the protocol buffers message field types: https://developers.google.com/protocol-buffers/docs/encoding ``` **regsub**(<regex>,<subst>[,<flags>]) ``` Applies a regex-based substitution to the input string. It does the same operation as the well-known "sed" utility with "s/<regex>/<subst>/". By default it will replace in the input string the first occurrence of the largest part matching the regular expression <regex> with the substitution string <subst>. It is possible to replace all occurrences instead by adding the flag "g" in the third argument <flags>. It is also possible to make the regex case insensitive by adding the flag "i" in <flags>. Since <flags> is a string, it is made up from the concatenation of all desired flags. Thus if both "i" and "g" are desired, using "gi" or "ig" will have the same effect. The first use of this converter is to replace certain characters or sequence of characters with other ones. It is highly recommended to enclose the regex part using protected quotes to improve clarity and never have a closing parenthesis from the regex mixed up with the parenthesis from the function. Just like in Bourne shell, the first level of quotes is processed when delimiting word groups on the line, a second level is usable for argument. It is recommended to use single quotes outside since these ones do not try to resolve backslashes nor dollar signs. ``` Examples: ``` # de-duplicate "/" in header "x-path". # input: x-path: /////a///b/c/xzxyz/ # output: x-path: /a/b/c/xzxyz/ http-request set-header x-path "%[hdr(x-path),regsub('/+','/','g')]" # copy query string to x-query and drop all leading '?', ';' and '&' http-request set-header x-query "%[query,regsub([?;&]*,'')]" # capture groups and backreferences # both lines do the same. http-request redirect location %[url,'regsub("(foo|bar)([0-9]+)?","\2\1",i)'] http-request redirect location %[url,regsub(\"(foo|bar)([0-9]+)?\",\"\2\1\",i)] ``` **capture-req**(<id>) ``` Capture the string entry in the request slot <id> and returns the entry as is. If the slot doesn't exist, the capture fails silently. ``` **See also:** "[declare capture](#declare%20capture)", "[http-request capture](#http-request%20capture)", "[http-response capture](#http-response%20capture)", "[capture.req.hdr](#capture.req.hdr)" and "[capture.res.hdr](#capture.res.hdr)" (sample fetches). **capture-res**(<id>) ``` Capture the string entry in the response slot <id> and returns the entry as is. If the slot doesn't exist, the capture fails silently. 
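For instance, a response header can be copied into a declared capture slot while also being stored in a variable (slot 0 below is a placeholder and must match a "declare capture response" line):

    declare capture response len 128
    # capture the Server header into response slot 0 as a side effect
    http-response set-var(txn.srv_hdr) res.hdr(server),capture-res(0)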
``` **See also:** "[declare capture](#declare%20capture)", "[http-request capture](#http-request%20capture)", "[http-response capture](#http-response%20capture)", "[capture.req.hdr](#capture.req.hdr)" and "[capture.res.hdr](#capture.res.hdr)" (sample fetches). **rtrim**(<chars>) ``` Skips any characters from <chars> from the end of the string representation of the input sample. ``` **sdbm**([<avalanche>]) ``` Hashes a binary input sample into an unsigned 32-bit quantity using the SDBM hash function. Optionally, it is possible to apply a full avalanche hash function to the output if the optional <avalanche> argument equals 1. This converter uses the same functions as used by the various hash-based load balancing algorithms, so it will provide exactly the same results. It is mostly intended for debugging, but can be used as a stick-table entry to collect rough statistics. It must not be used for security purposes as a 32-bit hash is trivial to break. See also "[crc32](#crc32)", "[djb2](#djb2)", "[wt6](#wt6)", "[crc32c](#crc32c)", and the "[hash-type](#hash-type)" directive. ``` **secure\_memcmp**(<var>) ``` Compares the contents of <var> with the input value. Both values are treated as a binary string. Returns a boolean indicating whether both binary strings match. If both binary strings have the same length then the comparison will be performed in constant time. Please note that this converter is only available when HAProxy has been compiled with USE_OPENSSL. ``` Example : ``` http-request set-var(txn.token) hdr(token) # Check whether the token sent by the client matches the secret token # value, without leaking the contents using a timing attack. acl token_given str(my_secret_token),secure_memcmp(txn.token) ``` ``` set-var(<var>[,<cond> ...]) Sets a variable with the input content and returns the content on the output as-is if all of the specified conditions are true (see below for a list of possible conditions). The variable keeps the value and the associated input type. The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response), "req" : the variable is shared only during request processing, "res" : the variable is shared only during response processing. This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. You can pass at most four conditions to the converter among the following possible conditions : - "ifexists"/"ifnotexists": Checks if the variable already existed before the current set-var call. A variable is usually created through a successful set-var call. Note that variables of scope "[proc](#proc)" are created during configuration parsing so the "ifexists" condition will always be true for them. - "ifempty"/"ifnotempty": Checks if the input is empty or not. Scalar types are never empty so the ifempty condition will be false for them regardless of the input's contents (integers, booleans, IPs ...). - "ifset"/"ifnotset": Checks if the variable was previously set or not, or if unset-var was called on the variable. A variable that does not exist yet is considered as not set. A "[proc](#proc)" variable can exist while not being set since they are created during configuration parsing. - "ifgt"/"iflt": Checks if the content of the variable is "greater than" or "less than" the input. 
This check can only be performed if both the input and the variable are of type integer. Otherwise, the check is considered as true by default. ``` **sha1** ``` Converts a binary input sample to a SHA-1 digest. The result is a binary sample with length of 20 bytes. ``` **sha2**([<bits>]) ``` Converts a binary input sample to a digest in the SHA-2 family. The result is a binary sample with length of <bits>/8 bytes. Valid values for <bits> are 224, 256, 384, 512, each corresponding to SHA-<bits>. The default value is 256. Please note that this converter is only available when HAProxy has been compiled with USE_OPENSSL. ``` **srv\_queue** ``` Takes an input value of type string, either a server name or <backend>/<server> format and returns the number of queued sessions on that server. Can be used in places where we want to look up queued sessions from a dynamic name, like a cookie value (e.g. req.cook(SRVID),srv_queue) and then make a decision to break persistence or direct a request elsewhere. ``` **strcmp**(<var>) ``` Compares the contents of <var> with the input value of type string. Returns the result as a signed integer compatible with strcmp(3): 0 if both strings are identical. A value less than 0 if the left string is lexicographically smaller than the right string or if the left string is shorter. A value greater than 0 otherwise (right string greater than left string or the right string is shorter). See also the secure_memcmp converter if you need to compare two binary strings in constant time. ``` Example : ``` http-request set-var(txn.host) hdr(host) # Check whether the client is attempting domain fronting. acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0 ``` **sub**(<value>) ``` Subtracts <value> from the input value of type signed integer, and returns the result as an signed integer. Note: in order to subtract the input from a constant, simply perform a "neg,add(value)". <value> can be a numeric value or a variable name. The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response), "req" : the variable is shared only during request processing, "res" : the variable is shared only during response processing. This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` **table\_bytes\_in\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the average client-to-server bytes rate associated with the input sample in the designated table, measured in amount of bytes over the period configured in the table. See also the sc_bytes_in_rate sample fetch keyword. ``` **table\_bytes\_out\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the average server-to-client bytes rate associated with the input sample in the designated table, measured in amount of bytes over the period configured in the table. See also the sc_bytes_out_rate sample fetch keyword. 
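For example, this converter might back a rough bandwidth-abuse rule (the table name, sizes and threshold below are placeholders):

    backend st_src
        stick-table type ip size 100k expire 10m store bytes_out_rate(1m)

    frontend fe_main
        bind :80
        http-request track-sc0 src table st_src
        # deny sources that pulled more than ~10 MB over the last minute
        http-request deny deny_status 429 if { src,table_bytes_out_rate(st_src) gt 10000000 }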
``` **table\_conn\_cnt**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the cumulative number of incoming connections associated with the input sample in the designated table. See also the sc_conn_cnt sample fetch keyword. ``` **table\_conn\_cur**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the current amount of concurrent tracked connections associated with the input sample in the designated table. See also the sc_conn_cur sample fetch keyword. ``` **table\_conn\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the average incoming connection rate associated with the input sample in the designated table. See also the sc_conn_rate sample fetch keyword. ``` **table\_expire**(<table>[,<default\_value>]) ``` Uses the input sample to perform a look up in the specified table. If the key is not found in the table, the converter fails except if <default_value> is set: this makes the converter succeed and return <default_value>. If the key is found the converter returns the key expiration delay associated with the input sample in the designated table. See also the table_idle sample fetch keyword. ``` **table\_gpt**(<idx>,<table>) ``` Uses the string representation of the input sample to perform a lookup in the specified table. If the key is not found in the table, boolean value zero is returned. Otherwise the converter returns the current value of the general purpose tag at the index <idx> of the array associated to the input sample in the designated <table>. <idx> is an integer between 0 and 99. If there is no GPT stored at this index, it also returns the boolean value 0. This applies only to the 'gpt' array data_type (and not on the legacy 'gpt0' data-type). See also the sc_get_gpt sample fetch keyword. ``` **table\_gpt0**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, boolean value zero is returned. Otherwise the converter returns the current value of the first general purpose tag associated with the input sample in the designated table. See also the sc_get_gpt0 sample fetch keyword. ``` **table\_gpc**(<idx>,<table>) ``` Uses the string representation of the input sample to perform a lookup in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the current value of the General Purpose Counter at the index <idx> of the array associated to the input sample in the designated <table>. <idx> is an integer between 0 and 99. If there is no GPC stored at this index, it also returns the boolean value 0. This applies only to the 'gpc' array data_type (and not to the legacy 'gpc0' nor 'gpc1' data_types). See also the sc_get_gpc sample fetch keyword. ``` **table\_gpc\_rate**(<idx>,<table>) ``` Uses the string representation of the input sample to perform a lookup in the specified table. If the key is not found in the table, integer value zero is returned. 
Otherwise the converter returns the frequency which the Global Purpose Counter at index <idx> of the array (associated to the input sample in the designated stick-table <table>) was incremented over the configured period. <idx> is an integer between 0 and 99. If there is no gpc_rate stored at this index, it also returns the boolean value 0. This applies only to the 'gpc_rate' array data_type (and not to the legacy 'gpc0_rate' nor 'gpc1_rate' data_types). See also the sc_gpc_rate sample fetch keyword. ``` **table\_gpc0**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the current value of the first general purpose counter associated with the input sample in the designated table. See also the sc_get_gpc0 sample fetch keyword. ``` **table\_gpc0\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the frequency which the gpc0 counter was incremented over the configured period in the table, associated with the input sample in the designated table. See also the sc_get_gpc0_rate sample fetch keyword. ``` **table\_gpc1**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the current value of the second general purpose counter associated with the input sample in the designated table. See also the sc_get_gpc1 sample fetch keyword. ``` **table\_gpc1\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the frequency which the gpc1 counter was incremented over the configured period in the table, associated with the input sample in the designated table. See also the sc_get_gpc1_rate sample fetch keyword. ``` **table\_http\_err\_cnt**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the cumulative number of HTTP errors associated with the input sample in the designated table. See also the sc_http_err_cnt sample fetch keyword. ``` **table\_http\_err\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the average rate of HTTP errors associated with the input sample in the designated table, measured in amount of errors over the period configured in the table. See also the sc_http_err_rate sample fetch keyword. ``` **table\_http\_fail\_cnt**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the cumulative number of HTTP failures associated with the input sample in the designated table. See also the sc_http_fail_cnt sample fetch keyword. ``` **table\_http\_fail\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. 
If the key is not found in the table, integer value zero is returned. Otherwise the average rate of HTTP failures associated with the input sample in the designated table, measured in amount of failures over the period configured in the table. See also the sc_http_fail_rate sample fetch keyword. ``` **table\_http\_req\_cnt**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the cumulative number of HTTP requests associated with the input sample in the designated table. See also the sc_http_req_cnt sample fetch keyword. ``` **table\_http\_req\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the average rate of HTTP requests associated with the input sample in the designated table, measured in amount of requests over the period configured in the table. See also the sc_http_req_rate sample fetch keyword. ``` **table\_idle**(<table>[,<default\_value>]) ``` Uses the input sample to perform a look up in the specified table. If the key is not found in the table, the converter fails except if <default_value> is set: this makes the converter succeed and return <default_value>. If the key is found the converter returns the time the key entry associated with the input sample in the designated table remained idle since the last time it was updated. See also the table_expire sample fetch keyword. ``` **table\_kbytes\_in**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the cumulative number of client- to-server data associated with the input sample in the designated table, measured in kilobytes. The test is currently performed on 32-bit integers, which limits values to 4 terabytes. See also the sc_kbytes_in sample fetch keyword. ``` **table\_kbytes\_out**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the cumulative number of server- to-client data associated with the input sample in the designated table, measured in kilobytes. The test is currently performed on 32-bit integers, which limits values to 4 terabytes. See also the sc_kbytes_out sample fetch keyword. ``` **table\_server\_id**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the server ID associated with the input sample in the designated table. A server ID is associated to a sample by a "stick" rule when a connection to a server succeeds. A server ID zero means that no server is associated with this key. ``` **table\_sess\_cnt**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the cumulative number of incoming sessions associated with the input sample in the designated table. 
Note that a session here refers to an incoming connection being accepted by the "[tcp-request connection](#tcp-request%20connection)" rulesets. See also the sc_sess_cnt sample fetch keyword. ``` **table\_sess\_rate**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the average incoming session rate associated with the input sample in the designated table. Note that a session here refers to an incoming connection being accepted by the "[tcp-request connection](#tcp-request%20connection)" rulesets. See also the sc_sess_rate sample fetch keyword. ``` **table\_trackers**(<table>) ``` Uses the string representation of the input sample to perform a look up in the specified table. If the key is not found in the table, integer value zero is returned. Otherwise the converter returns the current amount of concurrent connections tracking the same key as the input sample in the designated table. It differs from table_conn_cur in that it does not rely on any stored information but on the table's reference count (the "use" value which is returned by "show table" on the CLI). This may sometimes be more suited for layer7 tracking. It can be used to tell a server how many concurrent connections there are from a given address for example. See also the sc_trackers sample fetch keyword. ``` **ub64dec** ``` This converter is the base64url variant of b64dec converter. base64url encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding. It is also the encoding used in JWT (JSON Web Token) standard. ``` Example: ``` # Decoding a JWT payload: http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec ``` **ub64enc** ``` This converter is the base64url variant of base64 converter. ``` **upper** ``` Convert a string sample to upper case. This can only be placed after a string sample fetch function or after a transformation keyword returning a string type. The result is of type string. ``` **url\_dec**([<in\_form>]) ``` Takes an url-encoded string provided as input and returns the decoded version as output. The input and the output are of type string. If the <in_form> argument is set to a non-zero integer value, the input string is assumed to be part of a form or query string and the '+' character will be turned into a space (' '). Otherwise this will only happen after a question mark indicating a query string ('?'). ``` **url\_enc**([<enc\_type>]) ``` Takes a string provided as input and returns the encoded version as output. The input and the output are of type string. By default the type of encoding is meant for `query` type. There is no other type supported for now but the optional argument is here for future changes. ``` **ungrpc**(<field\_number>,[<field\_type>]) ``` This extracts the protocol buffers message field in raw mode of an input binary sample representation of a gRPC message with <field_number> as field number (dotted notation) if <field_type> is not present, or as an integer sample if this field is present. The list of the authorized types is the following one: "int32", "int64", "uint32", "uint64", "sint32", "sint64", "bool", "enum" for the "varint" wire type 0 "fixed64", "sfixed64", "double" for the 64bit wire type 1, "fixed32", "sfixed32", "float" for the wire type 5. Note that "string" is considered as a length-delimited type, so it does not require any <field_type> argument to be extracted. 
More information may be found here about the protocol buffers message field types: https://developers.google.com/protocol-buffers/docs/encoding ``` Example: ``` // with such a protocol buffer .proto file content adapted from // https://github.com/grpc/grpc/blob/master/examples/protos/route_guide.proto message Point { int32 latitude = 1; int32 longitude = 2; } message PPoint { Point point = 59; } message Rectangle { // One corner of the rectangle. PPoint lo = 48; // The other corner of the rectangle. PPoint hi = 49; } ``` ``` let's say a body request is made of a "Rectangle" object value (two PPoint protocol buffers messages), the four protocol buffers fields could be extracted with these "[ungrpc](#ungrpc)" directives: req.body,ungrpc(48.59.1,int32) # "latitude" of "lo" first PPoint req.body,ungrpc(48.59.2,int32) # "longitude" of "lo" first PPoint req.body,ungrpc(49.59.1,int32) # "latitude" of "hi" second PPoint req.body,ungrpc(49.59.2,int32) # "longitude" of "hi" second PPoint We could also extract the intermediary 48.59 field as a binary sample as follows: req.body,ungrpc(48.59) As a gRPC message is always made of a gRPC header followed by protocol buffers messages, in the previous example the "latitude" of "lo" first PPoint could be extracted with these equivalent directives: req.body,ungrpc(48.59),protobuf(1,int32) req.body,ungrpc(48),protobuf(59.1,int32) req.body,ungrpc(48),protobuf(59),protobuf(1,int32) Note that the first convert must be "[ungrpc](#ungrpc)", the remaining ones must be "[protobuf](#protobuf)" and only the last one may have or not a second argument to interpret the previous binary sample. ``` **unset-var**(<var>) ``` Unsets a variable if the input content is defined. The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response), "req" : the variable is shared only during request processing, "res" : the variable is shared only during response processing. This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` **utime**(<format>[,<offset>]) ``` Converts an integer supposed to contain a date since epoch to a string representing this date in UTC time using a format defined by the <format> string using strftime(3). The purpose is to allow any date format to be used in logs. An optional <offset> in seconds may be applied to the input date (positive or negative). See the strftime() man page for the format supported by your operating system. See also the ltime converter. ``` Example : ``` # Emit two colons, one with the UTC time and another with ip:port # e.g. 20140710162350 127.0.0.1:57325 log-format %[date,utime(%Y%m%d%H%M%S)]\ %ci:%cp ``` **word**(<index>,<delimiters>[,<count>]) ``` Extracts the nth word counting from the beginning (positive index) or from the end (negative index) considering given delimiters from an input string. Indexes start at 1 or -1 and delimiters are a string formatted list of chars. Delimiters at the beginning or end of the input string are ignored. Optionally you can specify <count> of words to extract (default: 1). Value of 0 indicates extraction of all remaining words. 
```
Example :
```
str(f1_f2_f3__f5),word(4,_)     # f5
str(f1_f2_f3__f5),word(2,_,0)   # f2_f3__f5
str(f1_f2_f3__f5),word(3,_,2)   # f3__f5
str(f1_f2_f3__f5),word(-2,_,3)  # f1_f2_f3
str(f1_f2_f3__f5),word(-3,_,0)  # f1_f2
str(/f1/f2/f3/f4),word(1,/)     # f1
```
**wt6**([<avalanche>])
```
Hashes a binary input sample into an unsigned 32-bit quantity using the WT6 hash function. Optionally, it is possible to apply a full avalanche hash function to the output if the optional <avalanche> argument equals 1. This converter uses the same functions as used by the various hash-based load balancing algorithms, so it will provide exactly the same results. It is mostly intended for debugging, but can be used as a stick-table entry to collect rough statistics. It must not be used for security purposes as a 32-bit hash is trivial to break. See also "[crc32](#crc32)", "[djb2](#djb2)", "[sdbm](#sdbm)", "[crc32c](#crc32c)", and the "[hash-type](#hash-type)" directive.
```
**xor**(<value>)
```
Performs a bitwise "XOR" (exclusive OR) between <value> and the input value of type signed integer, and returns the result as a signed integer. <value> can be a numeric value or a variable name. The name of the variable starts with an indication about its scope. The scopes allowed are:
  "[proc](#proc)" : the variable is shared with the whole process
  "sess" : the variable is shared with the whole session
  "txn"  : the variable is shared with the transaction (request and response),
  "req"  : the variable is shared only during request processing,
  "res"  : the variable is shared only during response processing.
This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'.
```
**xxh3**([<seed>])
```
Hashes a binary input sample into a signed 64-bit quantity using the XXH3 64-bit variant of the XXhash hash function. This hash supports a seed which defaults to zero but a different value may be passed as the <seed> argument. This hash is known to be very good and very fast so it can be used to hash URLs and/or URL parameters for use as stick-table keys to collect statistics with a low collision rate, though care must be taken as the algorithm is not considered as cryptographically secure.
```
**xxh32**([<seed>])
```
Hashes a binary input sample into an unsigned 32-bit quantity using the 32-bit variant of the XXHash hash function. This hash supports a seed which defaults to zero but a different value may be passed as the <seed> argument. This hash is known to be very good and very fast so it can be used to hash URLs and/or URL parameters for use as stick-table keys to collect statistics with a low collision rate, though care must be taken as the algorithm is not considered as cryptographically secure.
```
**xxh64**([<seed>])
```
Hashes a binary input sample into a signed 64-bit quantity using the 64-bit variant of the XXHash hash function. This hash supports a seed which defaults to zero but a different value may be passed as the <seed> argument. This hash is known to be very good and very fast so it can be used to hash URLs and/or URL parameters for use as stick-table keys to collect statistics with a low collision rate, though care must be taken as the algorithm is not considered as cryptographically secure.
```
**x509\_v\_err\_str**
```
Converts a numerical value to its corresponding X509_V_ERR constant name. It is useful in ACLs in order to have a configuration which works with multiple versions of OpenSSL, since some codes might change when changing version.
The list of constant provided by OpenSSL can be found at https://www.openssl.org/docs/manmaster/man3/X509_STORE_CTX_get_error.html#ERROR-CODES Be careful to read the page for the right version of OpenSSL. ``` Example: ``` bind :443 ssl crt common.pem ca-file ca-auth.crt verify optional crt-ignore-err X509_V_ERR_CERT_REVOKED,X509_V_ERR_CERT_HAS_EXPIRED acl cert_expired ssl_c_verify,x509_v_err_str -m str X509_V_ERR_CERT_HAS_EXPIRED acl cert_revoked ssl_c_verify,x509_v_err_str -m str X509_V_ERR_CERT_REVOKED acl cert_ok ssl_c_verify,x509_v_err_str -m str X509_V_OK http-response add-header X-SSL Ok if cert_ok http-response add-header X-SSL Expired if cert_expired http-response add-header X-SSL Revoked if cert_revoked ``` #### 7.3.2. Fetching samples from internal states ``` A first set of sample fetch methods applies to internal information which does not even relate to any client information. These ones are sometimes used with "monitor-fail" directives to report an internal status to external watchers. The sample fetch methods described in this section are usable anywhere. ``` **always\_false** : boolean ``` Always returns the boolean "false" value. It may be used with ACLs as a temporary replacement for another one when adjusting configurations. ``` **always\_true** : boolean ``` Always returns the boolean "true" value. It may be used with ACLs as a temporary replacement for another one when adjusting configurations. ``` **avg\_queue**([<backend>]) : integer ``` Returns the total number of queued connections of the designated backend divided by the number of active servers. The current backend is used if no backend is specified. This is very similar to "[queue](#queue)" except that the size of the farm is considered, in order to give a more accurate measurement of the time it may take for a new connection to be processed. The main usage is with ACL to return a sorry page to new users when it becomes certain they will get a degraded service, or to pass to the backend servers in a header so that they decide to work in degraded mode or to disable some functions to speed up the processing a bit. Note that in the event there would not be any active server anymore, twice the number of queued connections would be considered as the measured value. This is a fair estimate, as we expect one server to get back soon anyway, but we still prefer to send new traffic to another backend if in better shape. See also the "[queue](#queue)", "[be\_conn](#be_conn)", and "[be\_sess\_rate](#be_sess_rate)" sample fetches. ``` **be\_conn**([<backend>]) : integer ``` Applies to the number of currently established connections on the backend, possibly including the connection being evaluated. If no backend name is specified, the current one is used. But it is also possible to check another backend. It can be used to use a specific farm when the nominal one is full. See also the "[fe\_conn](#fe_conn)", "[queue](#queue)", "[be\_conn\_free](#be_conn_free)", and "[be\_sess\_rate](#be_sess_rate)" criteria. ``` **be\_conn\_free**([<backend>]) : integer ``` Returns an integer value corresponding to the number of available connections across available servers in the backend. Queue slots are not included. Backup servers are also not included, unless all other servers are down. If no backend name is specified, the current one is used. But it is also possible to check another backend. It can be used to use a specific farm when the nominal one is full. 
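
Example (purely illustrative; the "main" and "spare" backend names are
invented for this sketch): a frontend could spill new requests over to a
secondary farm once the nominal one has no free connection slot left:

    use_backend spare if { be_conn_free(main) eq 0 }
    default_backend main
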
See also the "[be\_conn](#be_conn)", "[connslots](#connslots)", and "[srv\_conn\_free](#srv_conn_free)" criteria. OTHER CAVEATS AND NOTES: if any of the server maxconn, or maxqueue is 0 (meaning unlimited), then this fetch clearly does not make sense, in which case the value returned will be -1. ``` **be\_sess\_rate**([<backend>]) : integer ``` Returns an integer value corresponding to the sessions creation rate on the backend, in number of new sessions per second. This is used with ACLs to switch to an alternate backend when an expensive or fragile one reaches too high a session rate, or to limit abuse of service (e.g. prevent sucking of an online dictionary). It can also be useful to add this element to logs using a log-format directive. ``` Example : ``` # Redirect to an error page if the dictionary is requested too often backend dynamic mode http acl being_scanned be_sess_rate gt 100 redirect location /denied.html if being_scanned ``` **bin**(<hex>) : bin ``` Returns a binary chain. The input is the hexadecimal representation of the string. ``` **bool**(<bool>) : bool ``` Returns a boolean value. <bool> can be 'true', 'false', '1' or '0'. 'false' and '0' are the same. 'true' and '1' are the same. ``` **connslots**([<backend>]) : integer ``` Returns an integer value corresponding to the number of connection slots still available in the backend, by totaling the maximum amount of connections on all servers and the maximum queue size. This is probably only used with ACLs. The basic idea here is to be able to measure the number of connection "slots" still available (connection + queue), so that anything beyond that (intended usage; see "[use\_backend](#use_backend)" keyword) can be redirected to a different backend. 'connslots' = number of available server connection slots, + number of available server queue slots. Note that while "[fe\_conn](#fe_conn)" may be used, "[connslots](#connslots)" comes in especially useful when you have a case of traffic going to one single ip, splitting into multiple backends (perhaps using ACLs to do name-based load balancing) and you want to be able to differentiate between different backends, and their available "[connslots](#connslots)". Also, whereas "nbsrv" only measures servers that are actually *down*, this fetch is more fine-grained and looks into the number of available connection slots as well. See also "[queue](#queue)" and "[avg\_queue](#avg_queue)". OTHER CAVEATS AND NOTES: at this point in time, the code does not take care of dynamic connections. Also, if any of the server maxconn, or maxqueue is 0, then this fetch clearly does not make sense, in which case the value returned will be -1. ``` **cpu\_calls** : integer ``` Returns the number of calls to the task processing the stream or current request since it was allocated. This number is reset for each new request on the same connections in case of HTTP keep-alive. This value should usually be low and stable (around 2 calls for a typically simple request) but may become high if some processing (compression, caching or analysis) is performed. This is purely for performance monitoring purposes. ``` **cpu\_ns\_avg** : integer ``` Returns the average number of nanoseconds spent in each call to the task processing the stream or current request. This number is reset for each new request on the same connections in case of HTTP keep-alive. This value indicates the overall cost of processing the request or the connection for each call. 
There is no good nor bad value but the time spent in a call automatically causes latency for other processing (see lat_ns_avg below), and may affect other connection's apparent response time. Certain operations like compression, complex regex matching or heavy Lua operations may directly affect this value, and having it in the logs will make it easier to spot the faulty processing that needs to be fixed to recover decent performance. Note: this value is exactly cpu_ns_tot divided by cpu_calls. ``` **cpu\_ns\_tot** : integer ``` Returns the total number of nanoseconds spent in each call to the task processing the stream or current request. This number is reset for each new request on the same connections in case of HTTP keep-alive. This value indicates the overall cost of processing the request or the connection for each call. There is no good nor bad value but the time spent in a call automatically causes latency for other processing (see lat_ns_avg below), induces CPU costs on the machine, and may affect other connection's apparent response time. Certain operations like compression, complex regex matching or heavy Lua operations may directly affect this value, and having it in the logs will make it easier to spot the faulty processing that needs to be fixed to recover decent performance. The value may be artificially high due to a high cpu_calls count, for example when processing many HTTP chunks, and for this reason it is often preferred to log cpu_ns_avg instead. ``` **date**([<offset>],[<unit>]) : integer ``` Returns the current date as the epoch (number of seconds since 01/01/1970). If an offset value is specified, then it is added to the current date before returning the value. This is particularly useful to compute relative dates, as both positive and negative offsets are allowed. It is useful combined with the http_date converter. <unit> is facultative, and can be set to "s" for seconds (default behavior), "ms" for milliseconds or "us" for microseconds. If unit is set, return value is an integer reflecting either seconds, milliseconds or microseconds since epoch, plus offset. It is useful when a time resolution of less than a second is needed. ``` Example : ``` # set an expires header to now+1 hour in every response http-response set-header Expires %[date(3600),http_date] # set an expires header to now+1 hour in every response, with # millisecond granularity http-response set-header Expires %[date(3600000,ms),http_date(0,ms)] ``` **date\_us** : integer ``` Return the microseconds part of the date (the "second" part is returned by date sample). This sample is coherent with the date sample as it is comes from the same timeval structure. ``` **env**(<name>) : string ``` Returns a string containing the value of environment variable <name>. As a reminder, environment variables are per-process and are sampled when the process starts. This can be useful to pass some information to a next hop server, or with ACLs to take specific action when the process is started a certain way. ``` Examples : ``` # Pass the Via header to next hop with the local hostname in it http-request add-header Via 1.1\ %[env(HOSTNAME)] # reject cookie-less requests when the STOP environment variable is set http-request deny if !{ req.cook(SESSIONID) -m found } { env(STOP) -m found } ``` **fe\_conn**([<frontend>]) : integer ``` Returns the number of currently established connections on the frontend, possibly including the connection being evaluated. If no frontend name is specified, the current one is used. 
But it is also possible to check another frontend. It can be used to return a sorry page before hard-blocking, or to use a specific backend to drain new requests when the farm is considered full. This is mostly used with ACLs but can also be used to pass some statistics to servers in HTTP headers. See also the "[dst\_conn](#dst_conn)", "[be\_conn](#be_conn)", "[fe\_sess\_rate](#fe_sess_rate)" fetches. ``` **fe\_req\_rate**([<frontend>]) : integer ``` Returns an integer value corresponding to the number of HTTP requests per second sent to a frontend. This number can differ from "[fe\_sess\_rate](#fe_sess_rate)" in situations where client-side keep-alive is enabled. ``` **fe\_sess\_rate**([<frontend>]) : integer ``` Returns an integer value corresponding to the sessions creation rate on the frontend, in number of new sessions per second. This is used with ACLs to limit the incoming session rate to an acceptable range in order to prevent abuse of service at the earliest moment, for example when combined with other layer 4 ACLs in order to force the clients to wait a bit for the rate to go down below the limit. It can also be useful to add this element to logs using a log-format directive. See also the "[rate-limit sessions](#rate-limit%20sessions)" directive for use in frontends. ``` Example : ``` # This frontend limits incoming mails to 10/s with a max of 100 # concurrent connections. We accept any connection below 10/s, and # force excess clients to wait for 100 ms. Since clients are limited to # 100 max, there cannot be more than 10 incoming mails per second. frontend mail bind :25 mode tcp maxconn 100 acl too_fast fe_sess_rate ge 10 tcp-request inspect-delay 100ms tcp-request content accept if ! too_fast tcp-request content accept if WAIT_END ``` **hostname** : string ``` Returns the system hostname. ``` **int**(<integer>) : signed integer ``` Returns a signed integer. ``` **ipv4**(<ipv4>) : ipv4 ``` Returns an ipv4. ``` **ipv6**(<ipv6>) : ipv6 ``` Returns an ipv6. ``` **last\_rule\_file** : string ``` This returns the name of the configuration file containing the last final rule that was matched during stream analysis. A final rule is one that terminates the evaluation of the rule set (like an "accept", "deny" or "[redirect](#redirect)"). This works for TCP request and response rules acting on the "content" rulesets, and on HTTP rules from "http-request", "http-response" and "[http-after-response](#http-after-response)" rule sets. The legacy "[redirect](#redirect)" rulesets are not supported (such information is not stored there), and neither "tcp-request connection" nor "[tcp-request session](#tcp-request%20session)" rulesets are supported because the information is stored at the stream level and streams do not exist during these rules. The main purpose of this function is to be able to report in logs where was the rule that gave the final verdict, in order to help figure why a request was denied for example. See also "[last\_rule\_line](#last_rule_line)". ``` **last\_rule\_line** : integer ``` This returns the line number in the configuration file where is located the last final rule that was matched during stream analysis. A final rule is one that terminates the evaluation of the rule set (like an "accept", "deny" or "[redirect](#redirect)"). This works for TCP request and response rules acting on the "content" rulesets, and on HTTP rules from "http-request", "http-response" and "[http-after-response](#http-after-response)" rule sets. 
The legacy "[redirect](#redirect)" rulesets are not supported (such information is not stored there), and neither "tcp-request connection" nor "[tcp-request session](#tcp-request%20session)" rulesets are supported because the information is stored at the stream level and streams do not exist during these rules. The main purpose of this function is to be able to report in logs where was the rule that gave the final verdict, in order to help figure why a request was denied for example. See also "[last\_rule\_file](#last_rule_file)". ``` **lat\_ns\_avg** : integer ``` Returns the average number of nanoseconds spent between the moment the task handling the stream is woken up and the moment it is effectively called. This number is reset for each new request on the same connections in case of HTTP keep-alive. This value indicates the overall latency inflicted to the current request by all other requests being processed in parallel, and is a direct indicator of perceived performance due to noisy neighbours. In order to keep the value low, it is possible to reduce the scheduler's run queue depth using "[tune.runqueue-depth](#tune.runqueue-depth)", to reduce the number of concurrent events processed at once using "[tune.maxpollevents](#tune.maxpollevents)", to decrease the stream's nice value using the "[nice](#nice)" option on the "bind" lines or in the frontend, to enable low latency scheduling using "[tune.sched.low-latency](#tune.sched.low-latency)", or to look for other heavy requests in logs (those exhibiting large values of "[cpu\_ns\_avg](#cpu_ns_avg)"), whose processing needs to be adjusted or fixed. Compression of large buffers could be a culprit, like heavy regex or long lists of regex. Note: this value is exactly lat_ns_tot divided by cpu_calls. ``` **lat\_ns\_tot** : integer ``` Returns the total number of nanoseconds spent between the moment the task handling the stream is woken up and the moment it is effectively called. This number is reset for each new request on the same connections in case of HTTP keep-alive. This value indicates the overall latency inflicted to the current request by all other requests being processed in parallel, and is a direct indicator of perceived performance due to noisy neighbours. In order to keep the value low, it is possible to reduce the scheduler's run queue depth using "[tune.runqueue-depth](#tune.runqueue-depth)", to reduce the number of concurrent events processed at once using "[tune.maxpollevents](#tune.maxpollevents)", to decrease the stream's nice value using the "[nice](#nice)" option on the "bind" lines or in the frontend, to enable low latency scheduling using "[tune.sched.low-latency](#tune.sched.low-latency)", or to look for other heavy requests in logs (those exhibiting large values of "[cpu\_ns\_avg](#cpu_ns_avg)"), whose processing needs to be adjusted or fixed. Compression of large buffers could be a culprit, like heavy regex or long lists of regex. Note: while it may intuitively seem that the total latency adds to a transfer time, it is almost never true because while a task waits for the CPU, network buffers continue to fill up and the next call will process more at once. The value may be artificially high due to a high cpu_calls count, for example when processing many HTTP chunks, and for this reason it is often preferred to log lat_ns_avg instead, which is a more relevant performance indicator. ``` **meth**(<method>) : method ``` Returns a method. 
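
Constant fetches such as meth(), str(), int() or bool() are mostly useful to
feed converters or to test an expression directly from the configuration. A
minimal, purely illustrative sketch (the header name is arbitrary):

    http-response set-header X-Debug-Hash %[str(foo),djb2]
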
``` **nbsrv**([<backend>]) : integer ``` Returns an integer value corresponding to the number of usable servers of either the current backend or the named backend. This is mostly used with ACLs but can also be useful when added to logs. This is normally used to switch to an alternate backend when the number of servers is too low to to handle some load. It is useful to report a failure when combined with "[monitor fail](#monitor%20fail)". ``` **prio\_class** : integer ``` Returns the priority class of the current session for http mode or connection for tcp mode. The value will be that set by the last call to "http-request set-priority-class" or "[tcp-request content set-priority-class](#tcp-request%20content%20set-priority-class)". ``` **prio\_offset** : integer ``` Returns the priority offset of the current session for http mode or connection for tcp mode. The value will be that set by the last call to "[http-request set-priority-offset](#http-request%20set-priority-offset)" or "tcp-request content set-priority-offset". ``` **proc** : integer ``` Always returns value 1 (historically it would return the calling process number). ``` **queue**([<backend>]) : integer ``` Returns the total number of queued connections of the designated backend, including all the connections in server queues. If no backend name is specified, the current one is used, but it is also possible to check another one. This is useful with ACLs or to pass statistics to backend servers. This can be used to take actions when queuing goes above a known level, generally indicating a surge of traffic or a massive slowdown on the servers. One possible action could be to reject new users but still accept old ones. See also the "[avg\_queue](#avg_queue)", "[be\_conn](#be_conn)", and "[be\_sess\_rate](#be_sess_rate)" fetches. ``` **rand**([<range>]) : integer ``` Returns a random integer value within a range of <range> possible values, starting at zero. If the range is not specified, it defaults to 2^32, which gives numbers between 0 and 4294967295. It can be useful to pass some values needed to take some routing decisions for example, or just for debugging purposes. This random must not be used for security purposes. ``` **srv\_conn**([<backend>/]<server>) : integer ``` Returns an integer value corresponding to the number of currently established connections on the designated server, possibly including the connection being evaluated. If <backend> is omitted, then the server is looked up in the current backend. It can be used to use a specific farm when one server is full, or to inform the server about our view of the number of active connections with it. See also the "[fe\_conn](#fe_conn)", "[be\_conn](#be_conn)", "[queue](#queue)", and "[srv\_conn\_free](#srv_conn_free)" fetch methods. ``` **srv\_conn\_free**([<backend>/]<server>) : integer ``` Returns an integer value corresponding to the number of available connections on the designated server, possibly including the connection being evaluated. The value does not include queue slots. If <backend> is omitted, then the server is looked up in the current backend. It can be used to use a specific farm when one server is full, or to inform the server about our view of the number of active connections with it. See also the "[be\_conn\_free](#be_conn_free)" and "[srv\_conn](#srv_conn)" fetch methods. OTHER CAVEATS AND NOTES: If the server maxconn is 0, then this fetch clearly does not make sense, in which case the value returned will be -1. 
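
Example (purely illustrative; "app" is an assumed backend name and the server
names and addresses are invented): prefer a given server for as long as it
still has free connection slots:

    backend app
        server fast1 192.168.0.10:80 maxconn 100
        server slow1 192.168.0.11:80 maxconn 100
        use-server fast1 if { srv_conn_free(app/fast1) gt 0 }
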
``` **srv\_is\_up**([<backend>/]<server>) : boolean ``` Returns true when the designated server is UP, and false when it is either DOWN or in maintenance mode. If <backend> is omitted, then the server is looked up in the current backend. It is mainly used to take action based on an external status reported via a health check (e.g. a geographical site's availability). Another possible use which is more of a hack consists in using dummy servers as boolean variables that can be enabled or disabled from the CLI, so that rules depending on those ACLs can be tweaked in realtime. ``` **srv\_queue**([<backend>/]<server>) : integer ``` Returns an integer value corresponding to the number of connections currently pending in the designated server's queue. If <backend> is omitted, then the server is looked up in the current backend. It can sometimes be used together with the "[use-server](#use-server)" directive to force to use a known faster server when it is not much loaded. See also the "[srv\_conn](#srv_conn)", "[avg\_queue](#avg_queue)" and "[queue](#queue)" sample fetch methods. ``` **srv\_sess\_rate**([<backend>/]<server>) : integer ``` Returns an integer corresponding to the sessions creation rate on the designated server, in number of new sessions per second. If <backend> is omitted, then the server is looked up in the current backend. This is mostly used with ACLs but can make sense with logs too. This is used to switch to an alternate backend when an expensive or fragile one reaches too high a session rate, or to limit abuse of service (e.g. prevent latent requests from overloading servers). ``` Example : ``` # Redirect to a separate back acl srv1_full srv_sess_rate(be1/srv1) gt 50 acl srv2_full srv_sess_rate(be1/srv2) gt 50 use_backend be2 if srv1_full or srv2_full ``` **srv\_iweight**([<backend>/]<server>) : integer ``` Returns an integer corresponding to the server's initial weight. If <backend> is omitted, then the server is looked up in the current backend. See also "[srv\_weight](#srv_weight)" and "[srv\_uweight](#srv_uweight)". ``` **srv\_uweight**([<backend>/]<server>) : integer ``` Returns an integer corresponding to the user visible server's weight. If <backend> is omitted, then the server is looked up in the current backend. See also "[srv\_weight](#srv_weight)" and "[srv\_iweight](#srv_iweight)". ``` **srv\_weight**([<backend>/]<server>) : integer ``` Returns an integer corresponding to the current (or effective) server's weight. If <backend> is omitted, then the server is looked up in the current backend. See also "[srv\_iweight](#srv_iweight)" and "[srv\_uweight](#srv_uweight)". ``` **stopping** : boolean ``` Returns TRUE if the process calling the function is currently stopping. This can be useful for logging, or for relaxing certain checks or helping close certain connections upon graceful shutdown. ``` **str**(<string>) : string ``` Returns a string. ``` **table\_avl**([<table>]) : integer ``` Returns the total number of available entries in the current proxy's stick-table or in the designated stick-table. See also table_cnt. ``` **table\_cnt**([<table>]) : integer ``` Returns the total number of entries currently in use in the current proxy's stick-table or in the designated stick-table. See also src_conn_cnt and table_avl for other entry counting methods. ``` **thread** : integer ``` Returns an integer value corresponding to the position of the thread calling the function, between 0 and (global.nbthread-1). This is useful for logging and debugging purposes. 
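
For example, the handling thread could be added to the traffic logs with a
log-format line such as this sketch (the exact format string is arbitrary):

    log-format "%ci:%cp [%tr] %ft %b/%s %ST %B thread=%[thread]"
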
``` **uuid**([<version>]) : string ``` Returns a UUID following the RFC4122 standard. If the version is not specified, a UUID version 4 (fully random) is returned. Currently, only version 4 is supported. ``` **var**(<var-name>[,<default>]) : undefined ``` Returns a variable with the stored type. If the variable is not set, the sample fetch fails, unless a default value is provided, in which case it will return it as a string. Empty strings are permitted. The name of the variable starts with an indication about its scope. The scopes allowed are: "[proc](#proc)" : the variable is shared with the whole process "sess" : the variable is shared with the whole session "txn" : the variable is shared with the transaction (request and response), "req" : the variable is shared only during request processing, "res" : the variable is shared only during response processing. This prefix is followed by a name. The separator is a '.'. The name may only contain characters 'a-z', 'A-Z', '0-9', '.' and '_'. ``` #### 7.3.3. Fetching samples at Layer 4 ``` The layer 4 usually describes just the transport layer which in HAProxy is closest to the connection, where no content is yet made available. The fetch methods described here are usable as low as the "[tcp-request connection](#tcp-request%20connection)" rule sets unless they require some future information. Those generally include TCP/IP addresses and ports, as well as elements from stick-tables related to the incoming connection. For retrieving a value from a sticky counters, the counter number can be explicitly set as 0, 1, or 2 using the pre-defined "sc0_", "sc1_", or "sc2_" prefix. These three pre-defined prefixes can only be used if MAX_SESS_STKCTR value does not exceed 3, otherwise the counter number can be specified as the first integer argument when using the "sc_" prefix. Starting from "sc_0" to "sc_N" where N is (MAX_SESS_STKCTR-1). An optional table may be specified with the "sc*" form, in which case the currently tracked key will be looked up into this alternate table instead of the table currently being tracked. ``` **bc\_dst** : ip ``` This is the destination ip address of the connection on the server side, which is the server address HAProxy connected to. It is of type IP and works on both IPv4 and IPv6 tables. On IPv6 tables, IPv4 address is mapped to its IPv6 equivalent, according to RFC 4291. ``` **bc\_dst\_port** : integer ``` Returns an integer value corresponding to the destination TCP port of the connection on the server side, which is the port HAProxy connected to. ``` **bc\_err** : integer ``` Returns the ID of the error that might have occurred on the current backend connection. See the "[fc\_err\_str](#fc_err_str)" fetch for a full list of error codes and their corresponding error message. ``` **bc\_err\_str** : string ``` Returns an error message describing what problem happened on the current backend connection, resulting in a connection failure. See the "[fc\_err\_str](#fc_err_str)" fetch for a full list of error codes and their corresponding error message. ``` **bc\_http\_major** : integer ``` Returns the backend connection's HTTP major version encoding, which may be 1 for HTTP/0.9 to HTTP/1.1 or 2 for HTTP/2. Note, this is based on the on-wire encoding and not the version present in the request header. ``` **bc\_src** : ip ``` This is the source ip address of the connection on the server side, which is the server address HAProxy connected from. It is of type IP and works on both IPv4 and IPv6 tables. 
On IPv6 tables, IPv4 addresses are mapped to their IPv6 equivalent, according to RFC 4291. ``` **bc\_src\_port** : integer ``` Returns an integer value corresponding to the TCP source port of the connection on the server side, which is the port HAProxy connected from. ``` **be\_id** : integer ``` Returns an integer containing the current backend's id. It can be used in frontends with responses to check which backend processed the request. It can also be used in a tcp-check or an http-check ruleset. ``` **be\_name** : string ``` Returns a string containing the current backend's name. It can be used in frontends with responses to check which backend processed the request. It can also be used in a tcp-check or an http-check ruleset. ``` **be\_server\_timeout** : integer ``` Returns the configuration value in millisecond for the server timeout of the current backend. This timeout can be overwritten by a "set-timeout" rule. See also the "[cur\_server\_timeout](#cur_server_timeout)". ``` **be\_tunnel\_timeout** : integer ``` Returns the configuration value in millisecond for the tunnel timeout of the current backend. This timeout can be overwritten by a "set-timeout" rule. See also the "[cur\_tunnel\_timeout](#cur_tunnel_timeout)". ``` **cur\_server\_timeout** : integer ``` Returns the currently applied server timeout in millisecond for the stream. In the default case, this will be equal to be_server_timeout unless a "set-timeout" rule has been applied. See also "[be\_server\_timeout](#be_server_timeout)". ``` **cur\_tunnel\_timeout** : integer ``` Returns the currently applied tunnel timeout in millisecond for the stream. In the default case, this will be equal to be_tunnel_timeout unless a "set-timeout" rule has been applied. See also "[be\_tunnel\_timeout](#be_tunnel_timeout)". ``` **dst** : ip ``` This is the destination IP address of the connection on the client side, which is the address the client connected to. Any tcp/http rules may alter this address. It can be useful when running in transparent mode. It is of type IP and works on both IPv4 and IPv6 tables. On IPv6 tables, IPv4 address is mapped to its IPv6 equivalent, according to RFC 4291. When the incoming connection passed through address translation or redirection involving connection tracking, the original destination address before the redirection will be reported. On Linux systems, the source and destination may seldom appear reversed if the nf_conntrack_tcp_loose sysctl is set, because a late response may reopen a timed out connection and switch what is believed to be the source and the destination. ``` **dst\_conn** : integer ``` Returns an integer value corresponding to the number of currently established connections on the same socket including the one being evaluated. It is normally used with ACLs but can as well be used to pass the information to servers in an HTTP header or in logs. It can be used to either return a sorry page before hard-blocking, or to use a specific backend to drain new requests when the socket is considered saturated. This offers the ability to assign different limits to different listening ports or addresses. See also the "[fe\_conn](#fe_conn)" and "[be\_conn](#be_conn)" fetches. ``` **dst\_is\_local** : boolean ``` Returns true if the destination address of the incoming connection is local to the system, or false if the address doesn't exist on the system, meaning that it was intercepted in transparent mode. 
It can be useful to apply certain rules by default to forwarded traffic and other rules to the traffic targeting the real address of the machine. For example the stats page could be delivered only on this address, or SSH access could be locally redirected. Please note that the check involves a few system calls, so it's better to do it only once per connection. ``` **dst\_port** : integer ``` Returns an integer value corresponding to the destination TCP port of the connection on the client side, which is the port the client connected to. Any tcp/http rules may alter this address. This might be used when running in transparent mode, when assigning dynamic ports to some clients for a whole application session, to stick all users to a same server, or to pass the destination port information to a server using an HTTP header. ``` **fc\_dst** : ip ``` This is the original destination IP address of the connection on the client side. Only "[tcp-request connection](#tcp-request%20connection)" rules may alter this address. See "[dst](#dst)" for details. ``` **fc\_dst\_is\_local** : boolean ``` Returns true if the original destination address of the incoming connection is local to the system, or false if the address doesn't exist on the system. See "[dst\_is\_local](#dst_is_local)" for details. ``` **fc\_dst\_port** : integer ``` Returns an integer value corresponding to the original destination TCP port of the connection on the client side. Only "[tcp-request connection](#tcp-request%20connection)" rules may alter this address. See "dst-port" for details. ``` **fc\_err** : integer ``` Returns the ID of the error that might have occurred on the current connection. Any strictly positive value of this fetch indicates that the connection did not succeed and would result in an error log being output (as described in [section 8.2.6](#8.2.6)). See the "[fc\_err\_str](#fc_err_str)" fetch for a full list of error codes and their corresponding error message. ``` **fc\_err\_str** : string ``` Returns an error message describing what problem happened on the current connection, resulting in a connection failure. This string corresponds to the "message" part of the error log format (see [section 8.2.6](#8.2.6)). 
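
Assuming a version providing the "error-log-format" directive, a hypothetical
way to log both the error ID and its message for failed connections could be:

    error-log-format "%ci:%cp %ft err=%[fc_err] msg=%[fc_err_str]"
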
See below for a full list of error codes and their corresponding error messages : +----+---------------------------------------------------------------------------+ | ID | message | +----+---------------------------------------------------------------------------+ | 0 | "Success" | | 1 | "Reached configured maxconn value" | | 2 | "Too many sockets on the process" | | 3 | "Too many sockets on the system" | | 4 | "Out of system buffers" | | 5 | "Protocol or address family not supported" | | 6 | "General socket error" | | 7 | "Source port range exhausted" | | 8 | "Can't bind to source address" | | 9 | "Out of local source ports on the system" | | 10 | "Local source address already in use" | | 11 | "Connection closed while waiting for PROXY protocol header" | | 12 | "Connection error while waiting for PROXY protocol header" | | 13 | "Timeout while waiting for PROXY protocol header" | | 14 | "Truncated PROXY protocol header received" | | 15 | "Received something which does not look like a PROXY protocol header" | | 16 | "Received an invalid PROXY protocol header" | | 17 | "Received an unhandled protocol in the PROXY protocol header" | | 18 | "Connection closed while waiting for NetScaler Client IP header" | | 19 | "Connection error while waiting for NetScaler Client IP header" | | 20 | "Timeout while waiting for a NetScaler Client IP header" | | 21 | "Truncated NetScaler Client IP header received" | | 22 | "Received an invalid NetScaler Client IP magic number" | | 23 | "Received an unhandled protocol in the NetScaler Client IP header" | | 24 | "Connection closed during SSL handshake" | | 25 | "Connection error during SSL handshake" | | 26 | "Timeout during SSL handshake" | | 27 | "Too many SSL connections" | | 28 | "Out of memory when initializing an SSL connection" | | 29 | "Rejected a client-initiated SSL renegotiation attempt" | | 30 | "SSL client CA chain cannot be verified" | | 31 | "SSL client certificate not trusted" | | 32 | "Server presented an SSL certificate different from the configured one" | | 33 | "Server presented an SSL certificate different from the expected one" | | 34 | "SSL handshake failure" | | 35 | "SSL handshake failure after heartbeat" | | 36 | "Stopped a TLSv1 heartbeat attack (CVE-2014-0160)" | | 37 | "Attempt to use SSL on an unknown target (internal error)" | | 38 | "Server refused early data" | | 39 | "SOCKS4 Proxy write error during handshake" | | 40 | "SOCKS4 Proxy read error during handshake" | | 41 | "SOCKS4 Proxy deny the request" | | 42 | "SOCKS4 Proxy handshake aborted by server" | | 43 | "SSL fatal error" | +----+---------------------------------------------------------------------------+ ``` **fc\_fackets** : integer ``` Returns the fack counter measured by the kernel for the client connection. If the server connection is not established, if the connection is not TCP or if the operating system does not support TCP_INFO, for example Linux kernels before 2.4, the sample fetch fails. ``` **fc\_http\_major** : integer ``` Reports the front connection's HTTP major version encoding, which may be 1 for HTTP/0.9 to HTTP/1.1 or 2 for HTTP/2. Note, this is based on the on-wire encoding and not on the version present in the request header. ``` **fc\_lost** : integer ``` Returns the lost counter measured by the kernel for the client connection. If the server connection is not established, if the connection is not TCP or if the operating system does not support TCP_INFO, for example Linux kernels before 2.4, the sample fetch fails. 
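
As a small, purely illustrative use of these connection-level fetches (the
header name is arbitrary), the on-wire HTTP major version seen on the client
side could be forwarded to the servers:

    http-request set-header X-Client-HTTP-Major %[fc_http_major]
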
```
**fc\_pp\_authority** : string
```
Returns the authority TLV sent by the client in the PROXY protocol header,
if any.
```
**fc\_pp\_unique\_id** : string
```
Returns the unique ID TLV sent by the client in the PROXY protocol header,
if any.
```
**fc\_rcvd\_proxy** : boolean
```
Returns true if the client initiated the connection with a PROXY protocol
header.
```
**fc\_reordering** : integer
```
Returns the reordering counter measured by the kernel for the client
connection. If the server connection is not established, if the connection is
not TCP or if the operating system does not support TCP_INFO, for example
Linux kernels before 2.4, the sample fetch fails.
```
**fc\_retrans** : integer
```
Returns the retransmits counter measured by the kernel for the client
connection. If the server connection is not established, if the connection is
not TCP or if the operating system does not support TCP_INFO, for example
Linux kernels before 2.4, the sample fetch fails.
```
**fc\_rtt**(<unit>) : integer
```
Returns the Round Trip Time (RTT) measured by the kernel for the client
connection. <unit> is facultative, by default the unit is milliseconds. <unit>
can be set to "ms" for milliseconds or "us" for microseconds. If the server
connection is not established, if the connection is not TCP or if the
operating system does not support TCP_INFO, for example Linux kernels before
2.4, the sample fetch fails.
```
**fc\_rttvar**(<unit>) : integer
```
Returns the Round Trip Time (RTT) variance measured by the kernel for the
client connection. <unit> is facultative, by default the unit is milliseconds.
<unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
server connection is not established, if the connection is not TCP or if the
operating system does not support TCP_INFO, for example Linux kernels before
2.4, the sample fetch fails.
```
**fc\_sacked** : integer
```
Returns the sacked counter measured by the kernel for the client connection.
If the server connection is not established, if the connection is not TCP or
if the operating system does not support TCP_INFO, for example Linux kernels
before 2.4, the sample fetch fails.
```
**fc\_src** : ip
```
This is the original source IP address of the connection on the client side.
Only "[tcp-request connection](#tcp-request%20connection)" rules may alter this address.
See "[src](#src)" for details.
```
**fc\_src\_is\_local** : boolean
```
Returns true if the source address of the incoming connection is local to the
system, or false if the address doesn't exist on the system.
See "[src\_is\_local](#src_is_local)" for details.
```
**fc\_src\_port** : integer
```
Returns an integer value corresponding to the TCP source port of the
connection on the client side. Only "[tcp-request connection](#tcp-request%20connection)" rules may alter
this address. See "src_port" for details.
```
**fc\_unacked** : integer
```
Returns the unacked counter measured by the kernel for the client connection.
If the server connection is not established, if the connection is not TCP or
if the operating system does not support TCP_INFO, for example Linux kernels
before 2.4, the sample fetch fails.
```
**fe\_defbe** : string
```
Returns a string containing the frontend's default backend name. It can be
used in frontends to check which backend will handle requests by default.
```
**fe\_id** : integer
```
Returns an integer containing the current frontend's id.
It can be used in backends to check from which frontend it was called, or to stick all users coming via a same frontend to the same server. ``` **fe\_name** : string ``` Returns a string containing the current frontend's name. It can be used in backends to check from which frontend it was called, or to stick all users coming via a same frontend to the same server. ``` **fe\_client\_timeout** : integer ``` Returns the configuration value in millisecond for the client timeout of the current frontend. ``` **sc\_bytes\_in\_rate**(<ctr>[,<table>]) : integer **sc0\_bytes\_in\_rate**([<table>]) : integer **sc1\_bytes\_in\_rate**([<table>]) : integer **sc2\_bytes\_in\_rate**([<table>]) : integer ``` Returns the average client-to-server bytes rate from the currently tracked counters, measured in amount of bytes over the period configured in the table. See also src_bytes_in_rate. ``` **sc\_bytes\_out\_rate**(<ctr>[,<table>]) : integer **sc0\_bytes\_out\_rate**([<table>]) : integer **sc1\_bytes\_out\_rate**([<table>]) : integer **sc2\_bytes\_out\_rate**([<table>]) : integer ``` Returns the average server-to-client bytes rate from the currently tracked counters, measured in amount of bytes over the period configured in the table. See also src_bytes_out_rate. ``` **sc\_clr\_gpc**(<idx>,<ctr>[,<table>]) : integer ``` Clears the General Purpose Counter at the index <idx> of the array associated to the designated tracked counter of ID <ctr> from current proxy's stick table or from the designated stick-table <table>, and returns its previous value. <idx> is an integer between 0 and 99 and <ctr> an integer between 0 and 2. Before the first invocation, the stored value is zero, so first invocation will always return zero. This fetch applies only to the 'gpc' array data_type (and not to the legacy 'gpc0' nor 'gpc1' data_types). ``` **sc\_clr\_gpc0**(<ctr>[,<table>]) : integer **sc0\_clr\_gpc0**([<table>]) : integer **sc1\_clr\_gpc0**([<table>]) : integer **sc2\_clr\_gpc0**([<table>]) : integer ``` Clears the first General Purpose Counter associated to the currently tracked counters, and returns its previous value. Before the first invocation, the stored value is zero, so first invocation will always return zero. This is typically used as a second ACL in an expression in order to mark a connection when a first ACL was verified : ``` Example: ``` # block if 5 consecutive requests continue to come faster than 10 sess # per second, and reset the counter as soon as the traffic slows down. acl abuse sc0_http_req_rate gt 10 acl kill sc0_inc_gpc0 gt 5 acl save sc0_clr_gpc0 ge 0 tcp-request connection accept if !abuse save tcp-request connection reject if abuse kill ``` **sc\_clr\_gpc1**(<ctr>[,<table>]) : integer **sc0\_clr\_gpc1**([<table>]) : integer **sc1\_clr\_gpc1**([<table>]) : integer **sc2\_clr\_gpc1**([<table>]) : integer ``` Clears the second General Purpose Counter associated to the currently tracked counters, and returns its previous value. Before the first invocation, the stored value is zero, so first invocation will always return zero. This is typically used as a second ACL in an expression in order to mark a connection when a first ACL was verified. ``` **sc\_conn\_cnt**(<ctr>[,<table>]) : integer **sc0\_conn\_cnt**([<table>]) : integer **sc1\_conn\_cnt**([<table>]) : integer **sc2\_conn\_cnt**([<table>]) : integer ``` Returns the cumulative number of incoming connections from currently tracked counters. See also src_conn_cnt. 
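
As an illustrative sketch of how these tracked counters are typically used in
a frontend (table size, expiry and threshold are arbitrary), the number of
concurrent connections per source address could be capped like this:

    stick-table type ip size 100k expire 30s store conn_cur
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_cur gt 20 }
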
```
**sc\_conn\_cur**(<ctr>[,<table>]) : integer
**sc0\_conn\_cur**([<table>]) : integer
**sc1\_conn\_cur**([<table>]) : integer
**sc2\_conn\_cur**([<table>]) : integer
```
Returns the current amount of concurrent connections tracking the same
tracked counters. This number is automatically incremented when tracking
begins and decremented when tracking stops. See also src_conn_cur.
```
**sc\_conn\_rate**(<ctr>[,<table>]) : integer
**sc0\_conn\_rate**([<table>]) : integer
**sc1\_conn\_rate**([<table>]) : integer
**sc2\_conn\_rate**([<table>]) : integer
```
Returns the average connection rate from the currently tracked counters,
measured in amount of connections over the period configured in the table.
See also src_conn_rate.
```
**sc\_get\_gpc**(<idx>,<ctr>[,<table>]) : integer
```
Returns the value of the General Purpose Counter at the index <idx> in the
GPC array and associated to the currently tracked counter of ID <ctr> from
the current proxy's stick-table or from the designated stick-table <table>.
<idx> is an integer between 0 and 99 and <ctr> an integer between 0 and 2.
If there is no gpc stored at this index, zero is returned. This fetch applies
only to the 'gpc' array data_type (and not to the legacy 'gpc0' nor 'gpc1'
data_types). See also src_get_gpc and sc_inc_gpc.
```
**sc\_get\_gpc0**(<ctr>[,<table>]) : integer
**sc0\_get\_gpc0**([<table>]) : integer
**sc1\_get\_gpc0**([<table>]) : integer
**sc2\_get\_gpc0**([<table>]) : integer
```
Returns the value of the first General Purpose Counter associated to the
currently tracked counters. See also src_get_gpc0 and
sc/sc0/sc1/sc2_inc_gpc0.
```
**sc\_get\_gpc1**(<ctr>[,<table>]) : integer
**sc0\_get\_gpc1**([<table>]) : integer
**sc1\_get\_gpc1**([<table>]) : integer
**sc2\_get\_gpc1**([<table>]) : integer
```
Returns the value of the second General Purpose Counter associated to the
currently tracked counters. See also src_get_gpc1 and
sc/sc0/sc1/sc2_inc_gpc1.
```
**sc\_get\_gpt**(<idx>,<ctr>[,<table>]) : integer
```
Returns the value of the General Purpose Tag at the index <idx> of the array
associated to the tracked counter of ID <ctr> and from the current proxy's
stick-table or the designated stick-table <table>. <idx> is an integer
between 0 and 99 and <ctr> an integer between 0 and 2. If there is no GPT
stored at this index, zero is returned. This fetch applies only to the 'gpt'
array data_type (and not to the legacy 'gpt0' data_type). See also
src_get_gpt.
```
**sc\_get\_gpt0**(<ctr>[,<table>]) : integer
**sc0\_get\_gpt0**([<table>]) : integer
**sc1\_get\_gpt0**([<table>]) : integer
**sc2\_get\_gpt0**([<table>]) : integer
```
Returns the value of the first General Purpose Tag associated to the
currently tracked counters. See also src_get_gpt0.
```
**sc\_gpc\_rate**(<idx>,<ctr>[,<table>]) : integer
```
Returns the average increment rate of the General Purpose Counter at the
index <idx> of the array associated to the tracked counter of ID <ctr> from
the current proxy's table or from the designated stick-table <table>. It
reports the frequency at which the gpc counter was incremented over the
configured period. <idx> is an integer between 0 and 99 and <ctr> an integer
between 0 and 2. Note that the 'gpc_rate' counter array must be stored in the
stick-table for a value to be returned, as 'gpc' only holds the event count.
This fetch applies only to the 'gpc_rate' array data_type (and not to the
legacy 'gpc0_rate' nor 'gpc1_rate' data_types). See also src_gpc_rate,
sc_get_gpc, and sc_inc_gpc.
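
A hypothetical adaptation of the gpc0 example above to the 'gpc' array, this
time at the HTTP level (table size, period and thresholds are arbitrary):

    # block if 5 consecutive requests keep coming faster than 10 req/s,
    # and reset the counter as soon as the traffic slows down.
    stick-table type ip size 100k expire 1h store http_req_rate(10s),gpc(1)
    http-request track-sc0 src
    acl abuse sc0_http_req_rate gt 10
    acl kill  sc_inc_gpc(0,0) gt 5
    acl save  sc_clr_gpc(0,0) ge 0
    http-request allow if !abuse save
    http-request deny  if abuse kill
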
```
**sc\_gpc0\_rate**(<ctr>[,<table>]) : integer
**sc0\_gpc0\_rate**([<table>]) : integer
**sc1\_gpc0\_rate**([<table>]) : integer
**sc2\_gpc0\_rate**([<table>]) : integer
```
Returns the average increment rate of the first General Purpose Counter
associated to the currently tracked counters. It reports the frequency at
which the gpc0 counter was incremented over the configured period. See also
src_gpc0_rate, sc/sc0/sc1/sc2_get_gpc0, and sc/sc0/sc1/sc2_inc_gpc0. Note
that the "gpc0_rate" counter must be stored in the stick-table for a value to
be returned, as "gpc0" only holds the event count.
```
**sc\_gpc1\_rate**(<ctr>[,<table>]) : integer
**sc0\_gpc1\_rate**([<table>]) : integer
**sc1\_gpc1\_rate**([<table>]) : integer
**sc2\_gpc1\_rate**([<table>]) : integer
```
Returns the average increment rate of the second General Purpose Counter
associated to the currently tracked counters. It reports the frequency at
which the gpc1 counter was incremented over the configured period. See also
src_gpc1_rate, sc/sc0/sc1/sc2_get_gpc1, and sc/sc0/sc1/sc2_inc_gpc1. Note
that the "gpc1_rate" counter must be stored in the stick-table for a value to
be returned, as "gpc1" only holds the event count.
```
**sc\_http\_err\_cnt**(<ctr>[,<table>]) : integer
**sc0\_http\_err\_cnt**([<table>]) : integer
**sc1\_http\_err\_cnt**([<table>]) : integer
**sc2\_http\_err\_cnt**([<table>]) : integer
```
Returns the cumulative number of HTTP errors from the currently tracked
counters. This includes both request errors and 4xx error responses.
See also src_http_err_cnt.
```
**sc\_http\_err\_rate**(<ctr>[,<table>]) : integer
**sc0\_http\_err\_rate**([<table>]) : integer
**sc1\_http\_err\_rate**([<table>]) : integer
**sc2\_http\_err\_rate**([<table>]) : integer
```
Returns the average rate of HTTP errors from the currently tracked counters,
measured in amount of errors over the period configured in the table. This
includes both request errors and 4xx error responses. See also
src_http_err_rate.
```
**sc\_http\_fail\_cnt**(<ctr>[,<table>]) : integer
**sc0\_http\_fail\_cnt**([<table>]) : integer
**sc1\_http\_fail\_cnt**([<table>]) : integer
**sc2\_http\_fail\_cnt**([<table>]) : integer
```
Returns the cumulative number of HTTP response failures from the currently
tracked counters. This includes both response errors and 5xx status codes
other than 501 and 505. See also src_http_fail_cnt.
```
**sc\_http\_fail\_rate**(<ctr>[,<table>]) : integer
**sc0\_http\_fail\_rate**([<table>]) : integer
**sc1\_http\_fail\_rate**([<table>]) : integer
**sc2\_http\_fail\_rate**([<table>]) : integer
```
Returns the average rate of HTTP response failures from the currently tracked
counters, measured in amount of failures over the period configured in the
table. This includes both response errors and 5xx status codes other than 501
and 505. See also src_http_fail_rate.
```
**sc\_http\_req\_cnt**(<ctr>[,<table>]) : integer
**sc0\_http\_req\_cnt**([<table>]) : integer
**sc1\_http\_req\_cnt**([<table>]) : integer
**sc2\_http\_req\_cnt**([<table>]) : integer
```
Returns the cumulative number of HTTP requests from the currently tracked
counters. This includes every started request, valid or not. See also
src_http_req_cnt.
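
As an illustrative sketch combining these HTTP counters (sizes, periods and
thresholds are arbitrary), a frontend could track per-source request and
error rates and turn away abusive clients:

    stick-table type ip size 100k expire 10m store http_req_rate(10s),http_err_rate(10s)
    http-request track-sc0 src
    acl too_many_reqs sc0_http_req_rate gt 200
    acl too_many_errs sc0_http_err_rate gt 30
    http-request deny deny_status 429 if too_many_reqs || too_many_errs
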
``` **sc\_http\_req\_rate**(<ctr>[,<table>]) : integer **sc0\_http\_req\_rate**([<table>]) : integer **sc1\_http\_req\_rate**([<table>]) : integer **sc2\_http\_req\_rate**([<table>]) : integer ``` Returns the average rate of HTTP requests from the currently tracked counters, measured in amount of requests over the period configured in the table. This includes every started request, valid or not. See also src_http_req_rate. ``` **sc\_inc\_gpc**(<idx>,<ctr>[,<table>]) : integer ``` Increments the General Purpose Counter at the index <idx> of the array associated to the designated tracked counter of ID <ctr> from current proxy's stick table or from the designated stick-table <table>, and returns its new value. <idx> is an integer between 0 and 99 and <ctr> an integer between 0 and 2. Before the first invocation, the stored value is zero, so first invocation will increase it to 1 and will return 1. This fetch applies only to the 'gpc' array data_type (and not to the legacy 'gpc0' nor 'gpc1' data_types). ``` **sc\_inc\_gpc0**(<ctr>[,<table>]) : integer **sc0\_inc\_gpc0**([<table>]) : integer **sc1\_inc\_gpc0**([<table>]) : integer **sc2\_inc\_gpc0**([<table>]) : integer ``` Increments the first General Purpose Counter associated to the currently tracked counters, and returns its new value. Before the first invocation, the stored value is zero, so first invocation will increase it to 1 and will return 1. This is typically used as a second ACL in an expression in order to mark a connection when a first ACL was verified : ``` Example: ``` acl abuse sc0_http_req_rate gt 10 acl kill sc0_inc_gpc0 gt 0 tcp-request connection reject if abuse kill ``` **sc\_inc\_gpc1**(<ctr>[,<table>]) : integer **sc0\_inc\_gpc1**([<table>]) : integer **sc1\_inc\_gpc1**([<table>]) : integer **sc2\_inc\_gpc1**([<table>]) : integer ``` Increments the second General Purpose Counter associated to the currently tracked counters, and returns its new value. Before the first invocation, the stored value is zero, so first invocation will increase it to 1 and will return 1. This is typically used as a second ACL in an expression in order to mark a connection when a first ACL was verified. ``` **sc\_kbytes\_in**(<ctr>[,<table>]) : integer **sc0\_kbytes\_in**([<table>]) : integer **sc1\_kbytes\_in**([<table>]) : integer **sc2\_kbytes\_in**([<table>]) : integer ``` Returns the total amount of client-to-server data from the currently tracked counters, measured in kilobytes. The test is currently performed on 32-bit integers, which limits values to 4 terabytes. See also src_kbytes_in. ``` **sc\_kbytes\_out**(<ctr>[,<table>]) : integer **sc0\_kbytes\_out**([<table>]) : integer **sc1\_kbytes\_out**([<table>]) : integer **sc2\_kbytes\_out**([<table>]) : integer ``` Returns the total amount of server-to-client data from the currently tracked counters, measured in kilobytes. The test is currently performed on 32-bit integers, which limits values to 4 terabytes. See also src_kbytes_out. ``` **sc\_sess\_cnt**(<ctr>[,<table>]) : integer **sc0\_sess\_cnt**([<table>]) : integer **sc1\_sess\_cnt**([<table>]) : integer **sc2\_sess\_cnt**([<table>]) : integer ``` Returns the cumulative number of incoming connections that were transformed into sessions, which means that they were accepted by a "tcp-request connection" rule, from the currently tracked counters. A backend may count more sessions than connections because each connection could result in many backend sessions if some HTTP keep-alive is performed over the connection with the client. 
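
As a purely illustrative use of the volume counters above (the header name
and threshold are arbitrary), responses could be tagged once a source has
downloaded roughly more than 1 GB:

    stick-table type ip size 100k expire 1h store bytes_out_cnt
    http-request track-sc1 src
    http-response set-header X-Heavy-Client yes if { sc1_kbytes_out gt 1048576 }
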
See also src_sess_cnt. ``` **sc\_sess\_rate**(<ctr>[,<table>]) : integer **sc0\_sess\_rate**([<table>]) : integer **sc1\_sess\_rate**([<table>]) : integer **sc2\_sess\_rate**([<table>]) : integer ``` Returns the average session rate from the currently tracked counters, measured in amount of sessions over the period configured in the table. A session is a connection that got past the early "[tcp-request connection](#tcp-request%20connection)" rules. A backend may count more sessions than connections because each connection could result in many backend sessions if some HTTP keep-alive is performed over the connection with the client. See also src_sess_rate. ``` **sc\_tracked**(<ctr>[,<table>]) : boolean **sc0\_tracked**([<table>]) : boolean **sc1\_tracked**([<table>]) : boolean **sc2\_tracked**([<table>]) : boolean ``` Returns true if the designated session counter is currently being tracked by the current session. This can be useful when deciding whether or not we want to set some values in a header passed to the server. ``` **sc\_trackers**(<ctr>[,<table>]) : integer **sc0\_trackers**([<table>]) : integer **sc1\_trackers**([<table>]) : integer **sc2\_trackers**([<table>]) : integer ``` Returns the current amount of concurrent connections tracking the same tracked counters. This number is automatically incremented when tracking begins and decremented when tracking stops. It differs from sc0_conn_cur in that it does not rely on any stored information but on the table's reference count (the "use" value which is returned by "show table" on the CLI). This may sometimes be more suited for layer7 tracking. It can be used to tell a server how many concurrent connections there are from a given address for example. ``` **so\_id** : integer ``` Returns an integer containing the current listening socket's id. It is useful in frontends involving many "bind" lines, or to stick all users coming via a same socket to the same server. ``` **so\_name** : string ``` Returns a string containing the current listening socket's name, as defined with name on a "bind" line. It can serve the same purposes as so_id but with strings instead of integers. ``` **src** : ip ``` This is the source IP address of the client of the session. Any tcp/http rules may alter this address. It is of type IP and works on both IPv4 and IPv6 tables. On IPv6 tables, IPv4 addresses are mapped to their IPv6 equivalent, according to RFC 4291. Note that it is the TCP-level source address which is used, and not the address of a client behind a proxy. However if the "[accept-proxy](#accept-proxy)" or "[accept-netscaler-cip](#accept-netscaler-cip)" bind directive is used, it can be the address of a client behind another PROXY-protocol compatible component for all rule sets except "[tcp-request connection](#tcp-request%20connection)" which sees the real address. When the incoming connection passed through address translation or redirection involving connection tracking, the original destination address before the redirection will be reported. On Linux systems, the source and destination may seldom appear reversed if the nf_conntrack_tcp_loose sysctl is set, because a late response may reopen a timed out connection and switch what is believed to be the source and the destination. 
``` Example: ``` # add an HTTP header in requests with the originating address' country http-request set-header X-Country %[src,map_ip(geoip.lst)] ``` **src\_bytes\_in\_rate**([<table>]) : integer ``` Returns the average bytes rate from the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in amount of bytes over the period configured in the table. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_bytes_in_rate. ``` **src\_bytes\_out\_rate**([<table>]) : integer ``` Returns the average bytes rate to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in amount of bytes over the period configured in the table. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_bytes_out_rate. ``` **src\_clr\_gpc**(<idx>,[<table>]) : integer ``` Clears the General Purpose Counter at the index <idx> of the array associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table <table>, and returns its previous value. <idx> is an integer between 0 and 99. If the address is not found, an entry is created and 0 is returned. This fetch applies only to the 'gpc' array data_type (and not to the legacy 'gpc0' nor 'gpc1' data_types). See also sc_clr_gpc. ``` **src\_clr\_gpc0**([<table>]) : integer ``` Clears the first General Purpose Counter associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, and returns its previous value. If the address is not found, an entry is created and 0 is returned. This is typically used as a second ACL in an expression in order to mark a connection when a first ACL was verified : ``` Example: ``` # block if 5 consecutive requests continue to come faster than 10 sess # per second, and reset the counter as soon as the traffic slows down. acl abuse src_http_req_rate gt 10 acl kill src_inc_gpc0 gt 5 acl save src_clr_gpc0 ge 0 tcp-request connection accept if !abuse save tcp-request connection reject if abuse kill ``` **src\_clr\_gpc1**([<table>]) : integer ``` Clears the second General Purpose Counter associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, and returns its previous value. If the address is not found, an entry is created and 0 is returned. This is typically used as a second ACL in an expression in order to mark a connection when a first ACL was verified. ``` **src\_conn\_cnt**([<table>]) : integer ``` Returns the cumulative number of connections initiated from the current incoming connection's source address in the current proxy's stick-table or in the designated stick-table. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_conn_cnt. ``` **src\_conn\_cur**([<table>]) : integer ``` Returns the current amount of concurrent connections initiated from the current incoming connection's source address in the current proxy's stick-table or in the designated stick-table. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_conn_cur. ``` **src\_conn\_rate**([<table>]) : integer ``` Returns the average connection rate from the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in amount of connections over the period configured in the table. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_conn_rate. 
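The following sketch is not part of the official examples; it illustrates "src_conn_rate" together with a frontend stick-table storing "conn_rate". The table size, period and threshold are arbitrary values and would need tuning for real traffic:

```
# Illustrative sketch only: track each source address and reject sources
# opening more than 100 connections per 10 seconds (all limits arbitrary).
frontend fe_main
    bind :80
    stick-table type ip size 100k expire 30s store conn_rate(10s)
    tcp-request connection track-sc0 src
    tcp-request connection reject if { src_conn_rate gt 100 }
    default_backend servers
```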
``` **src\_get\_gpc**(<idx>,[<table>]) : integer ``` Returns the value of the General Purpose Counter at the index <idx> of the array associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table <table>. <idx> is an integer between 0 and 99. If the address is not found or there is no gpc stored at this index, zero is returned. This fetch applies only to the 'gpc' array data_type (and not on the legacy 'gpc0' nor 'gpc1' data_types). See also sc_get_gpc and src_inc_gpc. ``` **src\_get\_gpc0**([<table>]) : integer ``` Returns the value of the first General Purpose Counter associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_get_gpc0 and src_inc_gpc0. ``` **src\_get\_gpc1**([<table>]) : integer ``` Returns the value of the second General Purpose Counter associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_get_gpc1 and src_inc_gpc1. ``` **src\_get\_gpt**(<idx>[,<table>]) : integer ``` Returns the value of the first General Purpose Tag at the index <idx> of the array associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table <table>. <idx> is an integer between 0 and 99. If the address is not found or the GPT is not stored, zero is returned. See also the sc_get_gpt sample fetch keyword. ``` **src\_get\_gpt0**([<table>]) : integer ``` Returns the value of the first General Purpose Tag associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_get_gpt0. ``` **src\_gpc\_rate**(<idx>[,<table>]) : integer ``` Returns the average increment rate of the General Purpose Counter at the index <idx> of the array associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table <table>. It reports the frequency which the gpc counter was incremented over the configured period. <idx> is an integer between 0 and 99. Note that the 'gpc_rate' counter must be stored in the stick-table for a value to be returned, as 'gpc' only holds the event count. This fetch applies only to the 'gpc_rate' array data_type (and not to the legacy 'gpc0_rate' nor 'gpc1_rate' data_types). See also sc_gpc_rate, src_get_gpc, and sc_inc_gpc. ``` **src\_gpc0\_rate**([<table>]) : integer ``` Returns the average increment rate of the first General Purpose Counter associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. It reports the frequency which the gpc0 counter was incremented over the configured period. See also sc/sc0/sc1/sc2_gpc0_rate, src_get_gpc0, and sc/sc0/sc1/sc2_inc_gpc0. Note that the "gpc0_rate" counter must be stored in the stick-table for a value to be returned, as "gpc0" only holds the event count. ``` **src\_gpc1\_rate**([<table>]) : integer ``` Returns the average increment rate of the second General Purpose Counter associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. It reports the frequency which the gpc1 counter was incremented over the configured period. 
See also sc/sc0/sc1/sc2_gpc1_rate, src_get_gpc1, and sc/sc0/sc1/sc2_inc_gpc1. Note that the "gpc1_rate" counter must be stored in the stick-table for a value to be returned, as "gpc1" only holds the event count. ``` **src\_http\_err\_cnt**([<table>]) : integer ``` Returns the cumulative number of HTTP errors from the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. This includes both request errors and 4xx error responses. See also sc/sc0/sc1/sc2_http_err_cnt. If the address is not found, zero is returned. ``` **src\_http\_err\_rate**([<table>]) : integer ``` Returns the average rate of HTTP errors from the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in amount of errors over the period configured in the table. This includes both request errors and 4xx error responses. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_http_err_rate. ``` **src\_http\_fail\_cnt**([<table>]) : integer ``` Returns the cumulative number of HTTP response failures triggered by the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. This includes both response errors and 5xx status codes other than 501 and 505. See also sc/sc0/sc1/sc2_http_fail_cnt. If the address is not found, zero is returned. ``` **src\_http\_fail\_rate**([<table>]) : integer ``` Returns the average rate of HTTP response failures triggered by the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in amount of failures over the period configured in the table. This includes both response errors and 5xx status codes other than 501 and 505. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_http_fail_rate. ``` **src\_http\_req\_cnt**([<table>]) : integer ``` Returns the cumulative number of HTTP requests from the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. This includes every started request, valid or not. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_http_req_cnt. ``` **src\_http\_req\_rate**([<table>]) : integer ``` Returns the average rate of HTTP requests from the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in amount of requests over the period configured in the table. This includes every started request, valid or not. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_http_req_rate. ``` **src\_inc\_gpc**(<idx>,[<table>]) : integer ``` Increments the General Purpose Counter at index <idx> of the array associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table <table>, and returns its new value. <idx> is an integer between 0 and 99. If the address is not found, an entry is created and 1 is returned. This fetch applies only to the 'gpc' array data_type (and not to the legacy 'gpc0' nor 'gpc1' data_types). See also sc_inc_gpc. ``` **src\_inc\_gpc0**([<table>]) : integer ``` Increments the first General Purpose Counter associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, and returns its new value. If the address is not found, an entry is created and 1 is returned. See also sc/sc0/sc1/sc2_inc_gpc0.
This is typically used as a second ACL in an expression in order to mark a connection when a first ACL was verified : ``` Example: ``` acl abuse src_http_req_rate gt 10 acl kill src_inc_gpc0 gt 0 tcp-request connection reject if abuse kill ``` **src\_inc\_gpc1**([<table>]) : integer ``` Increments the second General Purpose Counter associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, and returns its new value. If the address is not found, an entry is created and 1 is returned. See also sc/sc0/sc1/sc2_inc_gpc1. This is typically used as a second ACL in an expression in order to mark a connection when a first ACL was verified. ``` **src\_is\_local** : boolean ``` Returns true if the source address of the incoming connection is local to the system, or false if the address doesn't exist on the system, meaning that it comes from a remote machine. Note that UNIX addresses are considered local. It can be useful to apply certain access restrictions based on where the client comes from (e.g. require auth or https for remote machines). Please note that the check involves a few system calls, so it's better to do it only once per connection. ``` **src\_kbytes\_in**([<table>]) : integer ``` Returns the total amount of data received from the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in kilobytes. If the address is not found, zero is returned. The test is currently performed on 32-bit integers, which limits values to 4 terabytes. See also sc/sc0/sc1/sc2_kbytes_in. ``` **src\_kbytes\_out**([<table>]) : integer ``` Returns the total amount of data sent to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in kilobytes. If the address is not found, zero is returned. The test is currently performed on 32-bit integers, which limits values to 4 terabytes. See also sc/sc0/sc1/sc2_kbytes_out. ``` **src\_port** : integer ``` Returns an integer value corresponding to the TCP source port of the connection on the client side, which is the port the client connected from. Any tcp/http rules may alter this port. Usage of this function is very limited as modern protocols do not care much about source ports nowadays. ``` **src\_sess\_cnt**([<table>]) : integer ``` Returns the cumulative number of connections initiated from the incoming connection's source IPv4 address in the current proxy's stick-table or in the designated stick-table, that were transformed into sessions, which means that they were accepted by "[tcp-request](#tcp-request)" rules. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_sess_cnt. ``` **src\_sess\_rate**([<table>]) : integer ``` Returns the average session rate from the incoming connection's source address in the current proxy's stick-table or in the designated stick-table, measured in amount of sessions over the period configured in the table. A session is a connection that went past the early "[tcp-request](#tcp-request)" rules. If the address is not found, zero is returned. See also sc/sc0/sc1/sc2_sess_rate. ``` **src\_updt\_conn\_cnt**([<table>]) : integer ``` Creates or updates the entry associated to the incoming connection's source address in the current proxy's stick-table or in the designated stick-table. This table must be configured to store the "conn_cnt" data type, otherwise the match will be ignored.
The current count is incremented by one, and the expiration timer refreshed. The updated count is returned, so this match can't return zero. This was used to reject service abusers based on their source address. Note: it is recommended to use the more complete "track-sc*" actions in "[tcp-request](#tcp-request)" rules instead. ``` Example : ``` # This frontend limits incoming SSH connections to 3 per 10 seconds for # each source address, and rejects excess connections until a 10 second # silence is observed. At most 20 addresses are tracked. listen ssh bind :22 mode tcp maxconn 100 stick-table type ip size 20 expire 10s store conn_cnt tcp-request content reject if { src_updt_conn_cnt gt 3 } server local 127.0.0.1:22 ``` **srv\_id** : integer ``` Returns an integer containing the server's id when processing the response. While it's almost only used with ACLs, it may be used for logging or debugging. It can also be used in a tcp-check or an http-check ruleset. ``` **srv\_name** : string ``` Returns a string containing the server's name when processing the response. While it's almost only used with ACLs, it may be used for logging or debugging. It can also be used in a tcp-check or an http-check ruleset. ``` #### 7.3.4. Fetching samples at Layer 5 ``` The layer 5 usually describes just the session layer which in HAProxy is closest to the session once all the connection handshakes are finished, but when no content is yet made available. The fetch methods described here are usable as low as the "[tcp-request content](#tcp-request%20content)" rule sets unless they require some future information. Those generally include the results of SSL negotiations. ``` **51d.all**(<prop>[,<prop>\*]) : string ``` Returns values for the properties requested as a string, where values are separated by the delimiter specified with "[51degrees-property-separator](#51degrees-property-separator)". The device is identified using all the important HTTP headers from the request. The function can be passed up to five property names, and if a property name can't be found, the value "NoData" is returned. ``` Example : ``` # Here the header "X-51D-DeviceTypeMobileTablet" is added to the request # containing the three properties requested using all relevant headers from # the request. frontend http-in bind *:8081 default_backend servers http-request set-header X-51D-DeviceTypeMobileTablet \ %[51d.all(DeviceType,IsMobile,IsTablet)] ``` **ssl\_bc** : boolean ``` Returns true when the back connection was made via an SSL/TLS transport layer and is locally deciphered. This means the outgoing connection was made to a server with the "ssl" option. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_alg\_keysize** : integer ``` Returns the symmetric cipher key size supported in bits when the outgoing connection was made over an SSL/TLS transport layer. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_alpn** : string ``` This extracts the Application Layer Protocol Negotiation field from an outgoing connection made via a TLS transport layer. The result is a string containing the protocol name negotiated with the server. The SSL library must have been built with support for TLS extensions enabled (check haproxy -vv). Note that the TLS ALPN extension is not advertised unless the "alpn" keyword on the "server" line specifies a protocol list. Also, nothing forces the server to pick a protocol from this list, any other one may be requested.
The TLS ALPN extension is meant to replace the TLS NPN extension. See also "[ssl\_bc\_npn](#ssl_bc_npn)". It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_cipher** : string ``` Returns the name of the used cipher when the outgoing connection was made over an SSL/TLS transport layer. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_client\_random** : binary ``` Returns the client random of the back connection when the outgoing connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_err** : integer ``` When the outgoing connection was made over an SSL/TLS transport layer, returns the ID of the last error of the first error stack raised on the backend side. It can raise handshake errors as well as other read or write errors occurring during the connection's lifetime. In order to get a text description of this error code, you can either use the "[ssl\_bc\_err\_str](#ssl_bc_err_str)" sample fetch or use the "openssl errstr" command (which takes an error code in hexadecimal representation as parameter). Please refer to your SSL library's documentation to find the exhaustive list of error codes. ``` **ssl\_bc\_err\_str** : string ``` When the outgoing connection was made over an SSL/TLS transport layer, returns a string representation of the last error of the first error stack that was raised on the connection from the backend's perspective. See also "[ssl\_fc\_err](#ssl_fc_err)". ``` **ssl\_bc\_is\_resumed** : boolean ``` Returns true when the back connection was made over an SSL/TLS transport layer and the newly created SSL session was resumed using a cached session or a TLS ticket. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_npn** : string ``` This extracts the Next Protocol Negotiation field from an outgoing connection made via a TLS transport layer. The result is a string containing the protocol name negotiated with the server. The SSL library must have been built with support for TLS extensions enabled (check haproxy -vv). Note that the TLS NPN extension is not advertised unless the "npn" keyword on the "server" line specifies a protocol list. Also, nothing forces the server to pick a protocol from this list, any other one may be used. Please note that the TLS NPN extension was replaced with ALPN. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_protocol** : string ``` Returns the name of the used protocol when the outgoing connection was made over an SSL/TLS transport layer. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_unique\_id** : binary ``` When the outgoing connection was made over an SSL/TLS transport layer, returns the TLS unique ID as defined in RFC5929 [section 3](#3). The unique id can be encoded to base64 using the converter: "ssl_bc_unique_id,base64". It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_server\_random** : binary ``` Returns the server random of the back connection when the outgoing connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_session\_id** : binary ``` Returns the SSL ID of the back connection when the outgoing connection was made over an SSL/TLS transport layer.
It is useful to log if we want to know if session was reused or not. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_session\_key** : binary ``` Returns the SSL session master key of the back connection when the outgoing connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_bc\_use\_keysize** : integer ``` Returns the symmetric cipher key size used in bits when the outgoing connection was made over an SSL/TLS transport layer. It can be used in a tcp-check or an http-check ruleset. ``` **ssl\_c\_ca\_err** : integer ``` When the incoming connection was made over an SSL/TLS transport layer, returns the ID of the first error detected during verification of the client certificate at depth > 0, or 0 if no error was encountered during this verification process. Please refer to your SSL library's documentation to find the exhaustive list of error codes. ``` **ssl\_c\_ca\_err\_depth** : integer ``` When the incoming connection was made over an SSL/TLS transport layer, returns the depth in the CA chain of the first error detected during the verification of the client certificate. If no error is encountered, 0 is returned. ``` **ssl\_c\_chain\_der** : binary ``` Returns the DER formatted chain certificate presented by the client when the incoming connection was made over an SSL/TLS transport layer. When used for an ACL, the value(s) to match against can be passed in hexadecimal form. One can parse the result with any lib accepting ASN.1 DER data. It currently does not support resumed sessions. ``` **ssl\_c\_der** : binary ``` Returns the DER formatted certificate presented by the client when the incoming connection was made over an SSL/TLS transport layer. When used for an ACL, the value(s) to match against can be passed in hexadecimal form. ``` **ssl\_c\_err** : integer ``` When the incoming connection was made over an SSL/TLS transport layer, returns the ID of the first error detected during verification at depth 0, or 0 if no error was encountered during this verification process. Please refer to your SSL library's documentation to find the exhaustive list of error codes. ``` **ssl\_c\_i\_dn**([<entry>[,<occ>[,<format>]]]) : string ``` When the incoming connection was made over an SSL/TLS transport layer, returns the full distinguished name of the issuer of the certificate presented by the client when no <entry> is specified, or the value of the first given entry found from the beginning of the DN. If a positive/negative occurrence number is specified as the optional second argument, it returns the value of the nth given entry value from the beginning/end of the DN. For instance, "ssl_c_i_dn(OU,2)" the second organization unit, and "ssl_c_i_dn(CN)" retrieves the common name. The <format> parameter allows you to receive the DN suitable for consumption by different protocols. Currently supported is rfc2253 for LDAP v3. If you'd like to modify the format only you can specify an empty string and zero for the first two parameters. Example: ssl_c_i_dn(,0,rfc2253) ``` **ssl\_c\_key\_alg** : string ``` Returns the name of the algorithm used to generate the key of the certificate presented by the client when the incoming connection was made over an SSL/TLS transport layer. 
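As an illustrative sketch (not taken from the official examples), the issuer DN and key-algorithm fetches above can be combined with "ssl_c_s_dn" (described a few entries below) to forward client-certificate details to the servers; the certificate paths and header names are placeholders, and "verify optional" is assumed on the bind line:

```
# Illustrative sketch only: expose a few client certificate attributes
# to the backend when a certificate was presented.
frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem ca-file /etc/haproxy/ca.pem verify optional
    http-request set-header X-SSL-Client-CN      %[ssl_c_s_dn(CN)] if { ssl_fc_has_crt }
    http-request set-header X-SSL-Issuer-CN      %[ssl_c_i_dn(CN)] if { ssl_fc_has_crt }
    http-request set-header X-SSL-Client-Key-Alg %[ssl_c_key_alg]  if { ssl_fc_has_crt }
    default_backend servers
```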
``` **ssl\_c\_notafter** : string ``` Returns the end date presented by the client as a formatted string YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_c\_notbefore** : string ``` Returns the start date presented by the client as a formatted string YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_c\_s\_dn**([<entry>[,<occ>[,<format>]]]) : string ``` When the incoming connection was made over an SSL/TLS transport layer, returns the full distinguished name of the subject of the certificate presented by the client when no <entry> is specified, or the value of the first given entry found from the beginning of the DN. If a positive/negative occurrence number is specified as the optional second argument, it returns the value of the nth given entry value from the beginning/end of the DN. For instance, "ssl_c_s_dn(OU,2)" the second organization unit, and "ssl_c_s_dn(CN)" retrieves the common name. The <format> parameter allows you to receive the DN suitable for consumption by different protocols. Currently supported is rfc2253 for LDAP v3. If you'd like to modify the format only you can specify an empty string and zero for the first two parameters. Example: ssl_c_s_dn(,0,rfc2253) ``` **ssl\_c\_serial** : binary ``` Returns the serial of the certificate presented by the client when the incoming connection was made over an SSL/TLS transport layer. When used for an ACL, the value(s) to match against can be passed in hexadecimal form. ``` **ssl\_c\_sha1** : binary ``` Returns the SHA-1 fingerprint of the certificate presented by the client when the incoming connection was made over an SSL/TLS transport layer. This can be used to stick a client to a server, or to pass this information to a server. Note that the output is binary, so if you want to pass that signature to the server, you need to encode it in hex or base64, such as in the example below: ``` Example: ``` http-request set-header X-SSL-Client-SHA1 %[ssl_c_sha1,hex] ``` **ssl\_c\_sig\_alg** : string ``` Returns the name of the algorithm used to sign the certificate presented by the client when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_c\_used** : boolean ``` Returns true if current SSL session uses a client certificate even if current connection uses SSL session resumption. See also "[ssl\_fc\_has\_crt](#ssl_fc_has_crt)". ``` **ssl\_c\_verify** : integer ``` Returns the verify result error ID when the incoming connection was made over an SSL/TLS transport layer, otherwise zero if no error is encountered. Please refer to your SSL library's documentation for an exhaustive list of error codes. ``` **ssl\_c\_version** : integer ``` Returns the version of the certificate presented by the client when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_f\_der** : binary ``` Returns the DER formatted certificate presented by the frontend when the incoming connection was made over an SSL/TLS transport layer. When used for an ACL, the value(s) to match against can be passed in hexadecimal form. ``` **ssl\_f\_i\_dn**([<entry>[,<occ>[,<format>]]]) : string ``` When the incoming connection was made over an SSL/TLS transport layer, returns the full distinguished name of the issuer of the certificate presented by the frontend when no <entry> is specified, or the value of the first given entry found from the beginning of the DN. 
If a positive/negative occurrence number is specified as the optional second argument, it returns the value of the nth given entry value from the beginning/end of the DN. For instance, "ssl_f_i_dn(OU,2)" the second organization unit, and "ssl_f_i_dn(CN)" retrieves the common name. The <format> parameter allows you to receive the DN suitable for consumption by different protocols. Currently supported is rfc2253 for LDAP v3. If you'd like to modify the format only you can specify an empty string and zero for the first two parameters. Example: ssl_f_i_dn(,0,rfc2253) ``` **ssl\_f\_key\_alg** : string ``` Returns the name of the algorithm used to generate the key of the certificate presented by the frontend when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_f\_notafter** : string ``` Returns the end date presented by the frontend as a formatted string YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_f\_notbefore** : string ``` Returns the start date presented by the frontend as a formatted string YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_f\_s\_dn**([<entry>[,<occ>[,<format>]]]) : string ``` When the incoming connection was made over an SSL/TLS transport layer, returns the full distinguished name of the subject of the certificate presented by the frontend when no <entry> is specified, or the value of the first given entry found from the beginning of the DN. If a positive/negative occurrence number is specified as the optional second argument, it returns the value of the nth given entry value from the beginning/end of the DN. For instance, "ssl_f_s_dn(OU,2)" the second organization unit, and "ssl_f_s_dn(CN)" retrieves the common name. The <format> parameter allows you to receive the DN suitable for consumption by different protocols. Currently supported is rfc2253 for LDAP v3. If you'd like to modify the format only you can specify an empty string and zero for the first two parameters. Example: ssl_f_s_dn(,0,rfc2253) ``` **ssl\_f\_serial** : binary ``` Returns the serial of the certificate presented by the frontend when the incoming connection was made over an SSL/TLS transport layer. When used for an ACL, the value(s) to match against can be passed in hexadecimal form. ``` **ssl\_f\_sha1** : binary ``` Returns the SHA-1 fingerprint of the certificate presented by the frontend when the incoming connection was made over an SSL/TLS transport layer. This can be used to know which certificate was chosen using SNI. ``` **ssl\_f\_sig\_alg** : string ``` Returns the name of the algorithm used to sign the certificate presented by the frontend when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_f\_version** : integer ``` Returns the version of the certificate presented by the frontend when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_fc** : boolean ``` Returns true when the front connection was made via an SSL/TLS transport layer and is locally deciphered. This means it has matched a socket declared with a "bind" line having the "ssl" option. ``` Example : ``` # This passes "X-Proto: https" to servers when client connects over SSL listen http-https bind :80 bind :443 ssl crt /etc/haproxy.pem http-request add-header X-Proto https if { ssl_fc } ``` **ssl\_fc\_alg\_keysize** : integer ``` Returns the symmetric cipher key size supported in bits when the incoming connection was made over an SSL/TLS transport layer. 
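Building on "ssl_fc" and "ssl_fc_alg_keysize", a hypothetical policy sketch could refuse requests whose TLS session only supports a weak symmetric key; the 128-bit threshold and the certificate path are arbitrary:

```
# Illustrative sketch only: deny requests when the negotiated cipher
# supports fewer than 128 bits of symmetric key.
frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem
    http-request deny deny_status 403 if { ssl_fc_alg_keysize lt 128 }
    default_backend servers
```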
``` **ssl\_fc\_alpn** : string ``` This extracts the Application Layer Protocol Negotiation field from an incoming connection made via a TLS transport layer and locally deciphered by HAProxy. The result is a string containing the protocol name advertised by the client. The SSL library must have been built with support for TLS extensions enabled (check haproxy -vv). Note that the TLS ALPN extension is not advertised unless the "alpn" keyword on the "bind" line specifies a protocol list. Also, nothing forces the client to pick a protocol from this list, any other one may be requested. The TLS ALPN extension is meant to replace the TLS NPN extension. See also "[ssl\_fc\_npn](#ssl_fc_npn)". ``` **ssl\_fc\_cipher** : string ``` Returns the name of the used cipher when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_fc\_cipherlist\_bin**([<filter\_option>]) : binary ``` Returns the binary form of the client hello cipher list. The maximum returned value length is limited by the shared capture buffer size controlled by "[tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size)" setting. Setting <filter_option> allows to filter returned data. Accepted values: 0 : return the full list of ciphers (default) 1 : exclude GREASE (RFC8701) values from the output ``` Example: ``` http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ %[ssl_fc_ecformats_bin,be2dec(-,1)] acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ -f /path/to/file/with/malware-ja3.lst http-request set-header X-Malware True if is_malware http-request set-header X-Malware False if !is_malware ``` **ssl\_fc\_cipherlist\_hex**([<filter\_option>]) : string ``` Returns the binary form of the client hello cipher list encoded as hexadecimal. The maximum returned value length is limited by the shared capture buffer size controlled by "[tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size)" setting. Setting <filter_option> allows to filter returned data. Accepted values: 0 : return the full list of ciphers (default) 1 : exclude GREASE (RFC8701) values from the output ``` **ssl\_fc\_cipherlist\_str**([<filter\_option>]) : string ``` Returns the decoded text form of the client hello cipher list. The maximum returned value length is limited by the shared capture buffer size controlled by "[tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size)" setting. Setting <filter_option> allows to filter returned data. Accepted values: 0 : return the full list of ciphers (default) 1 : exclude GREASE (RFC8701) values from the output Note that this sample-fetch is only available with OpenSSL >= 1.0.2. If the function is not enabled, this sample-fetch returns the hash like "[ssl\_fc\_cipherlist\_xxh](#ssl_fc_cipherlist_xxh)". ``` **ssl\_fc\_cipherlist\_xxh** : integer ``` Returns a xxh64 of the cipher list. This hash can return only if the value "[tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size)" is set greater than 0, however the hash take into account all the data of the cipher list. ``` **ssl\_fc\_ecformats\_bin** : binary ``` Return the binary form of the client hello supported elliptic curve point formats. The maximum returned value length is limited by the shared capture buffer size controlled by "[tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size)" setting. 
``` Example: ``` http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ %[ssl_fc_ecformats_bin,be2dec(-,1)] acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ -f /path/to/file/with/malware-ja3.lst http-request set-header X-Malware True if is_malware http-request set-header X-Malware False if !is_malware ``` **ssl\_fc\_eclist\_bin**([<filter\_option>]) : binary ``` Returns the binary form of the client hello supported elliptic curves. The maximum returned value length is limited by the shared capture buffer size controlled by "[tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size)" setting. Setting <filter_option> allows to filter returned data. Accepted values: 0 : return the full list of supported elliptic curves (default) 1 : exclude GREASE (RFC8701) values from the output ``` Example: ``` http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ %[ssl_fc_ecformats_bin,be2dec(-,1)] acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ -f /path/to/file/with/malware-ja3.lst http-request set-header X-Malware True if is_malware http-request set-header X-Malware False if !is_malware ``` **ssl\_fc\_extlist\_bin**([<filter\_option>]) : binary ``` Returns the binary form of the client hello extension list. The maximum returned value length is limited by the shared capture buffer size controlled by "[tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size)" setting. Setting <filter_option> allows to filter returned data. Accepted values: 0 : return the full list of extensions (default) 1 : exclude GREASE (RFC8701) values from the output ``` Example: ``` http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ %[ssl_fc_ecformats_bin,be2dec(-,1)] acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ -f /path/to/file/with/malware-ja3.lst http-request set-header X-Malware True if is_malware http-request set-header X-Malware False if !is_malware ``` **ssl\_fc\_client\_random** : binary ``` Returns the client random of the front connection when the incoming connection was made over an SSL/TLS transport layer. It is useful to to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL. ``` **ssl\_fc\_client\_early\_traffic\_secret** : string ``` Return the CLIENT_EARLY_TRAFFIC_SECRET as an hexadecimal string for the front connection when the incoming connection was made over a TLS 1.3 transport layer. Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be activated with "tune.ssl.keylog on" in the global section. See also "[tune.ssl.keylog](#tune.ssl.keylog)" ``` **ssl\_fc\_client\_handshake\_traffic\_secret** : string ``` Return the CLIENT_HANDSHAKE_TRAFFIC_SECRET as an hexadecimal string for the front connection when the incoming connection was made over a TLS 1.3 transport layer. Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be activated with "tune.ssl.keylog on" in the global section. 
See also "[tune.ssl.keylog](#tune.ssl.keylog)" ``` **ssl\_fc\_client\_traffic\_secret\_0** : string ``` Return the CLIENT_TRAFFIC_SECRET_0 as an hexadecimal string for the front connection when the incoming connection was made over a TLS 1.3 transport layer. Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be activated with "tune.ssl.keylog on" in the global section. See also "[tune.ssl.keylog](#tune.ssl.keylog)" ``` **ssl\_fc\_exporter\_secret** : string ``` Return the EXPORTER_SECRET as an hexadecimal string for the front connection when the incoming connection was made over a TLS 1.3 transport layer. Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be activated with "tune.ssl.keylog on" in the global section. See also "[tune.ssl.keylog](#tune.ssl.keylog)" ``` **ssl\_fc\_early\_exporter\_secret** : string ``` Return the EARLY_EXPORTER_SECRET as an hexadecimal string for the front connection when the incoming connection was made over an TLS 1.3 transport layer. Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be activated with "tune.ssl.keylog on" in the global section. See also "[tune.ssl.keylog](#tune.ssl.keylog)" ``` **ssl\_fc\_err** : integer ``` When the incoming connection was made over an SSL/TLS transport layer, returns the ID of the last error of the first error stack raised on the frontend side, or 0 if no error was encountered. It can be used to identify handshake related errors other than verify ones (such as cipher mismatch), as well as other read or write errors occurring during the connection's lifetime. Any error happening during the client's certificate verification process will not be raised through this fetch but via the existing "[ssl\_c\_err](#ssl_c_err)", "[ssl\_c\_ca\_err](#ssl_c_ca_err)" and "[ssl\_c\_ca\_err\_depth](#ssl_c_ca_err_depth)" fetches. In order to get a text description of this error code, you can either use the "[ssl\_fc\_err\_str](#ssl_fc_err_str)" sample fetch or use the "openssl errstr" command (which takes an error code in hexadecimal representation as parameter). Please refer to your SSL library's documentation to find the exhaustive list of error codes. ``` **ssl\_fc\_err\_str** : string ``` When the incoming connection was made over an SSL/TLS transport layer, returns a string representation of the last error of the first error stack that was raised on the frontend side. Any error happening during the client's certificate verification process will not be raised through this fetch. See also "[ssl\_fc\_err](#ssl_fc_err)". ``` **ssl\_fc\_has\_crt** : boolean ``` Returns true if a client certificate is present in an incoming connection over SSL/TLS transport layer. Useful if 'verify' statement is set to 'optional'. Note: on SSL session resumption with Session ID or TLS ticket, client certificate is not present in the current connection but may be retrieved from the cache or the ticket. So prefer "[ssl\_c\_used](#ssl_c_used)" if you want to check if current SSL session uses a client certificate. ``` **ssl\_fc\_has\_early** : boolean ``` Returns true if early data were sent, and the handshake didn't happen yet. As it has security implications, it is useful to be able to refuse those, or wait until the handshake happened. 
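For "ssl_fc_has_early", a minimal sketch (assuming 0-RTT is enabled with the "allow-0rtt" bind option and relying on the "http-request wait-for-handshake" action described elsewhere in this manual) holds requests received as early data until the handshake completes, so that replayed requests never reach the servers:

```
# Illustrative sketch only: accept TLS 1.3 early data but do not forward
# requests carried as early data before the handshake has completed.
frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem allow-0rtt
    http-request wait-for-handshake if { ssl_fc_has_early }
    default_backend servers
```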
``` **ssl\_fc\_has\_sni** : boolean ``` This checks for the presence of a Server Name Indication TLS extension (SNI) in an incoming connection made over an SSL/TLS transport layer. Returns true when the incoming connection presents a TLS SNI field. This requires that the SSL library is built with support for TLS extensions enabled (check haproxy -vv). ``` **ssl\_fc\_is\_resumed** : boolean ``` Returns true if the SSL/TLS session has been resumed through the use of SSL session cache or TLS tickets on an incoming connection over an SSL/TLS transport layer. ``` **ssl\_fc\_npn** : string ``` This extracts the Next Protocol Negotiation field from an incoming connection made via a TLS transport layer and locally deciphered by HAProxy. The result is a string containing the protocol name advertised by the client. The SSL library must have been built with support for TLS extensions enabled (check haproxy -vv). Note that the TLS NPN extension is not advertised unless the "npn" keyword on the "bind" line specifies a protocol list. Also, nothing forces the client to pick a protocol from this list, any other one may be requested. Please note that the TLS NPN extension was replaced with ALPN. ``` **ssl\_fc\_protocol** : string ``` Returns the name of the used protocol when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_fc\_protocol\_hello\_id** : integer ``` The version of the TLS protocol by which the client wishes to communicate during the session as indicated in the client hello message. This value is only returned if "[tune.ssl.capture-buffer-size](#tune.ssl.capture-buffer-size)" is set to a value greater than 0. ``` Example: ``` http-request set-header X-SSL-JA3 %[ssl_fc_protocol_hello_id],\ %[ssl_fc_cipherlist_bin(1),be2dec(-,2)],\ %[ssl_fc_extlist_bin(1),be2dec(-,2)],\ %[ssl_fc_eclist_bin(1),be2dec(-,2)],\ %[ssl_fc_ecformats_bin,be2dec(-,1)] acl is_malware req.fhdr(x-ssl-ja3),digest(md5),hex \ -f /path/to/file/with/malware-ja3.lst http-request set-header X-Malware True if is_malware http-request set-header X-Malware False if !is_malware ``` **ssl\_fc\_unique\_id** : binary ``` When the incoming connection was made over an SSL/TLS transport layer, returns the TLS unique ID as defined in RFC5929 [section 3](#3). The unique id can be encoded to base64 using the converter: "ssl_fc_unique_id,base64". ``` **ssl\_fc\_server\_handshake\_traffic\_secret** : string ``` Return the SERVER_HANDSHAKE_TRAFFIC_SECRET as an hexadecimal string for the front connection when the incoming connection was made over a TLS 1.3 transport layer. Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be activated with "tune.ssl.keylog on" in the global section. See also "[tune.ssl.keylog](#tune.ssl.keylog)" ``` **ssl\_fc\_server\_traffic\_secret\_0** : string ``` Return the SERVER_TRAFFIC_SECRET_0 as an hexadecimal string for the front connection when the incoming connection was made over a TLS 1.3 transport layer. Require OpenSSL >= 1.1.1. This is one of the keys dumped by the OpenSSL keylog callback to generate the SSLKEYLOGFILE. The SSL Key logging must be activated with "tune.ssl.keylog on" in the global section. See also "[tune.ssl.keylog](#tune.ssl.keylog)" ``` **ssl\_fc\_server\_random** : binary ``` Returns the server random of the front connection when the incoming connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers.
This requires OpenSSL >= 1.1.0, or BoringSSL. ``` **ssl\_fc\_session\_id** : binary ``` Returns the SSL ID of the front connection when the incoming connection was made over an SSL/TLS transport layer. It is useful to stick a given client to a server. It is important to note that some browsers refresh their session ID every few minutes. ``` **ssl\_fc\_session\_key** : binary ``` Returns the SSL session master key of the front connection when the incoming connection was made over an SSL/TLS transport layer. It is useful to decrypt traffic sent using ephemeral ciphers. This requires OpenSSL >= 1.1.0, or BoringSSL. ``` **ssl\_fc\_sni** : string ``` This extracts the Server Name Indication TLS extension (SNI) field from an incoming connection made via an SSL/TLS transport layer and locally deciphered by HAProxy. The result (when present) typically is a string matching the HTTPS host name (253 chars or less). The SSL library must have been built with support for TLS extensions enabled (check haproxy -vv). This fetch is different from "[req.ssl\_sni](#req.ssl_sni)" above in that it applies to the connection being deciphered by HAProxy and not to SSL contents being blindly forwarded. See also "ssl_fc_sni_end" and "ssl_fc_sni_reg" below. This requires that the SSL library is built with support for TLS extensions enabled (check haproxy -vv). CAUTION! Except under very specific conditions, it is normally not correct to use this field as a substitute for the HTTP "Host" header field. For example, when forwarding an HTTPS connection to a server, the SNI field must be set from the HTTP Host header field using "req.hdr(host)" and not from the front SNI value. The reason is that SNI is solely used to select the certificate the server side will present, and that clients are then allowed to send requests with different Host values as long as they match the names in the certificate. As such, "[ssl\_fc\_sni](#ssl_fc_sni)" should normally not be used as an argument to the "[sni](#sni)" server keyword, unless the backend works in TCP mode. ACL derivatives : ssl_fc_sni_end : suffix match ssl_fc_sni_reg : regex match ``` **ssl\_fc\_use\_keysize** : integer ``` Returns the symmetric cipher key size used in bits when the incoming connection was made over an SSL/TLS transport layer. ``` **ssl\_s\_der** : binary ``` Returns the DER formatted certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. When used for an ACL, the value(s) to match against can be passed in hexadecimal form. ``` **ssl\_s\_chain\_der** : binary ``` Returns the DER formatted chain certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. When used for an ACL, the value(s) to match against can be passed in hexadecimal form. One can parse the result with any lib accepting ASN.1 DER data. It currently does not support resumed sessions. ``` **ssl\_s\_key\_alg** : string ``` Returns the name of the algorithm used to generate the key of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. ``` **ssl\_s\_notafter** : string ``` Returns the end date presented by the server as a formatted string YYMMDDhhmmss[Z] when the outgoing connection was made over an SSL/TLS transport layer. ``` **ssl\_s\_notbefore** : string ``` Returns the start date presented by the server as a formatted string YYMMDDhhmmss[Z] when the outgoing connection was made over an SSL/TLS transport layer. 
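As a complement to the "ssl_fc_sni" entry above, the following hypothetical sketch routes locally deciphered connections by SNI using the exact and suffix ACL derivatives; the backend names and domains are placeholders:

```
# Illustrative sketch only: pick a backend from the deciphered SNI value.
frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/
    use_backend bk_app  if { ssl_fc_sni -i app.example.com }
    use_backend bk_blog if { ssl_fc_sni_end -i .example.org }
    default_backend bk_default
```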
``` **ssl\_s\_i\_dn**([<entry>[,<occ>[,<format>]]]) : string ``` When the outgoing connection was made over an SSL/TLS transport layer, returns the full distinguished name of the issuer of the certificate presented by the server when no <entry> is specified, or the value of the first given entry found from the beginning of the DN. If a positive/negative occurrence number is specified as the optional second argument, it returns the value of the nth given entry value from the beginning/end of the DN. For instance, "ssl_s_i_dn(OU,2)" the second organization unit, and "ssl_s_i_dn(CN)" retrieves the common name. The <format> parameter allows you to receive the DN suitable for consumption by different protocols. Currently supported is rfc2253 for LDAP v3. If you'd like to modify the format only you can specify an empty string and zero for the first two parameters. Example: ssl_s_i_dn(,0,rfc2253) ``` **ssl\_s\_s\_dn**([<entry>[,<occ>[,<format>]]]) : string ``` When the outgoing connection was made over an SSL/TLS transport layer, returns the full distinguished name of the subject of the certificate presented by the server when no <entry> is specified, or the value of the first given entry found from the beginning of the DN. If a positive/negative occurrence number is specified as the optional second argument, it returns the value of the nth given entry value from the beginning/end of the DN. For instance, "ssl_s_s_dn(OU,2)" the second organization unit, and "ssl_s_s_dn(CN)" retrieves the common name. The <format> parameter allows you to receive the DN suitable for consumption by different protocols. Currently supported is rfc2253 for LDAP v3. If you'd like to modify the format only you can specify an empty string and zero for the first two parameters. Example: ssl_s_s_dn(,0,rfc2253) ``` **ssl\_s\_serial** : binary ``` Returns the serial of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. When used for an ACL, the value(s) to match against can be passed in hexadecimal form. ``` **ssl\_s\_sha1** : binary ``` Returns the SHA-1 fingerprint of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. This can be used to know which certificate was chosen using SNI. ``` **ssl\_s\_sig\_alg** : string ``` Returns the name of the algorithm used to sign the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. ``` **ssl\_s\_version** : integer ``` Returns the version of the certificate presented by the server when the outgoing connection was made over an SSL/TLS transport layer. ``` #### 7.3.5. Fetching samples from buffer contents (Layer 6) ``` Fetching samples from buffer contents is a bit different from the previous sample fetches above because the sampled data are ephemeral. These data can only be used when they're available and will be lost when they're forwarded. For this reason, samples fetched from buffer contents during a request cannot be used in a response for example. Even while the data are being fetched, they can change. Sometimes it is necessary to set some delays or combine multiple sample fetch methods to ensure that the expected data are complete and usable, for example through TCP request content inspection. Please see the "tcp-request content" keyword for more detailed information on the subject. Warning : Following sample fetches are ignored if used from HTTP proxies. They only deal with raw contents found in the buffers. 
On their side, HTTP proxies use structured content. Thus the raw representation of these data is meaningless. A warning is emitted if an ACL relies on one of the following sample fetches. But it is not possible to detect all invalid usage (for instance inside a log-format string or a sample expression). So be careful. ``` **distcc\_body**(<token>[,<occ>]) : binary ``` Parses a distcc message and returns the body associated to occurrence #<occ> of the token <token>. Occurrences start at 1, and when unspecified, any may match though in practice only the first one is checked for now. This can be used to extract file names or arguments in files built using distcc through HAProxy. Please refer to distcc's protocol documentation for the complete list of supported tokens. ``` **distcc\_param**(<token>[,<occ>]) : integer ``` Parses a distcc message and returns the parameter associated to occurrence #<occ> of the token <token>. Occurrences start at 1, and when unspecified, any may match though in practice only the first one is checked for now. This can be used to extract certain information such as the protocol version, the file size or the argument in files built using distcc through HAProxy. Another use case consists in waiting for the start of the preprocessed file contents before connecting to the server to avoid keeping idle connections. Please refer to distcc's protocol documentation for the complete list of supported tokens. ``` Example : ``` # wait up to 20s for the pre-processed file to be uploaded tcp-request inspect-delay 20s tcp-request content accept if { distcc_param(DOTI) -m found } # send large files to the big farm use_backend big_farm if { distcc_param(DOTI) gt 1000000 } ``` **payload**(<offset>,<length>) : binary (deprecated) ``` This is an alias for "[req.payload](#req.payload)" when used in the context of a request (e.g. "[stick on](#stick%20on)", "[stick match](#stick%20match)"), and for "[res.payload](#res.payload)" when used in the context of a response such as in "stick store response". ``` **payload\_lv**(<offset1>,<length>[,<offset2>]) : binary (deprecated) ``` This is an alias for "[req.payload\_lv](#req.payload_lv)" when used in the context of a request (e.g. "[stick on](#stick%20on)", "[stick match](#stick%20match)"), and for "[res.payload\_lv](#res.payload_lv)" when used in the context of a response such as in "stick store response". ``` **req.len** : integer **req\_len** : integer (deprecated) ``` Returns an integer value corresponding to the number of bytes present in the request buffer. This is mostly used in ACL. It is important to understand that this test does not return false as long as the buffer is changing. This means that a check with equality to zero will almost always immediately match at the beginning of the session, while a test for more data will wait for that data to come in and return false only when HAProxy is certain that no more data will come in. This test was designed to be used with TCP request content inspection. ``` **req.payload**(<offset>,<length>) : binary ``` This extracts a binary block of <length> bytes starting at byte <offset> in the request buffer. As a special case, if the <length> argument is zero, the whole buffer from <offset> to the end is extracted. This can be used with ACLs in order to check for the presence of some content in a buffer at any location.
ACL derivatives : req.payload(<offset>,<length>) : hex binary match ``` **req.payload\_lv**(<offset1>,<length>[,<offset2>]) : binary ``` This extracts a binary block whose size is specified at <offset1> for <length> bytes, and which starts at <offset2> if specified or just after the length in the request buffer. The <offset2> parameter also supports relative offsets if prepended with a '+' or '-' sign. ACL derivatives : req.payload_lv(<offset1>,<length>[,<offset2>]) : hex binary match ``` Example : ``` please consult the example from the "[stick store-response](#stick%20store-response)" keyword. ``` **req.proto\_http** : boolean **req\_proto\_http** : boolean (deprecated) ``` Returns true when data in the request buffer looks like HTTP and correctly parses as such. It uses the same parser as the common HTTP request parser, so there should be no surprises. The test does not match until the request is complete, failed or timed out. This test may be used to report the protocol in TCP logs, but the biggest use is to block TCP request analysis until a complete HTTP request is present in the buffer, for example to track a header. ``` Example: ``` # track request counts per "[base](#base)" (concatenation of Host+URL) tcp-request inspect-delay 10s tcp-request content reject if !HTTP tcp-request content track-sc0 base table req-rate ``` **req.rdp\_cookie**([<name>]) : string **rdp\_cookie**([<name>]) : string (deprecated) ``` When the request buffer looks like the RDP protocol, extracts the RDP cookie <name>, or any cookie if unspecified. The parser only checks for the first cookie, as illustrated in the RDP protocol specification. The cookie name is case insensitive. Generally the "MSTS" cookie name will be used, as it can contain the user name of the client connecting to the server if properly configured on the client. The "MSTSHASH" cookie is often used as well for session stickiness to servers. This differs from "balance rdp-cookie" in that any balancing algorithm may be used and thus the distribution of clients to backend servers is not linked to a hash of the RDP cookie. It is envisaged that using a balancing algorithm such as "balance roundrobin" or "balance leastconn" will lead to a more even distribution of clients to backend servers than the hash used by "balance rdp-cookie". ACL derivatives : req.rdp_cookie([<name>]) : exact string match ``` Example : ``` listen tse-farm bind 0.0.0.0:3389 # wait up to 5s for an RDP cookie in the request tcp-request inspect-delay 5s tcp-request content accept if RDP_COOKIE # apply RDP cookie persistence persist rdp-cookie # Persist based on the mstshash cookie # This only makes sense if # balance rdp-cookie is not used stick-table type string size 204800 stick on req.rdp_cookie(mstshash) server srv1 1.1.1.1:3389 server srv2 1.1.1.2:3389 ``` **See also :** "balance rdp-cookie", "[persist rdp-cookie](#persist%20rdp-cookie)", "[tcp-request](#tcp-request)" and the "[req.rdp\_cookie](#req.rdp_cookie)" ACL. **req.rdp\_cookie\_cnt**([name]) : integer **rdp\_cookie\_cnt**([name]) : integer (deprecated) ``` Tries to parse the request buffer as RDP protocol, then returns an integer corresponding to the number of RDP cookies found. If an optional cookie name is passed, only cookies matching this name are considered. This is mostly used in ACL.
ACL derivatives : req.rdp_cookie_cnt([<name>]) : integer match ``` **req.ssl\_alpn** : string ``` Returns a string containing the values of the Application-Layer Protocol Negotiation (ALPN) TLS extension (RFC7301), sent by the client within the SSL ClientHello message. Note that this only applies to raw contents found in the request buffer and not to the contents deciphered via an SSL data layer, so this will not work with "bind" lines having the "ssl" option. This is useful in ACL to make a routing decision based upon the ALPN preferences of a TLS client, like in the example below. See also "[ssl\_fc\_alpn](#ssl_fc_alpn)". ``` Examples : ``` # Wait for a client hello for at most 5 seconds tcp-request inspect-delay 5s tcp-request content accept if { req.ssl_hello_type 1 } use_backend bk_acme if { req.ssl_alpn acme-tls/1 } default_backend bk_default ``` **req.ssl\_ec\_ext** : boolean ``` Returns a boolean identifying if client sent the Supported Elliptic Curves Extension as defined in RFC4492, [section 5.1](#5.1). within the SSL ClientHello message. This can be used to present ECC compatible clients with EC certificate and to use RSA for all others, on the same IP address. Note that this only applies to raw contents found in the request buffer and not to contents deciphered via an SSL data layer, so this will not work with "bind" lines having the "ssl" option. ``` **req.ssl\_hello\_type** : integer **req\_ssl\_hello\_type** : integer (deprecated) ``` Returns an integer value containing the type of the SSL hello message found in the request buffer if the buffer contains data that parse as a complete SSL (v3 or superior) client hello message. Note that this only applies to raw contents found in the request buffer and not to contents deciphered via an SSL data layer, so this will not work with "bind" lines having the "ssl" option. This is mostly used in ACL to detect presence of an SSL hello message that is supposed to contain an SSL session ID usable for stickiness. ``` **req.ssl\_sni** : string **req\_ssl\_sni** : string (deprecated) ``` Returns a string containing the value of the Server Name TLS extension sent by a client in a TLS stream passing through the request buffer if the buffer contains data that parse as a complete SSL (v3 or superior) client hello message. Note that this only applies to raw contents found in the request buffer and not to contents deciphered via an SSL data layer, so this will not work with "bind" lines having the "ssl" option. This will only work for actual implicit TLS based protocols like HTTPS (443), IMAPS (993), SMTPS (465), however it will not work for explicit TLS based protocols, like SMTP (25/587) or IMAP (143). SNI normally contains the name of the host the client tries to connect to (for recent browsers). SNI is useful for allowing or denying access to certain hosts when SSL/TLS is used by the client. This test was designed to be used with TCP request content inspection. If content switching is needed, it is recommended to first wait for a complete client hello (type 1), like in the example below. See also "[ssl\_fc\_sni](#ssl_fc_sni)". 
ACL derivatives : req.ssl_sni : exact string match ``` Examples : ``` # Wait for a client hello for at most 5 seconds tcp-request inspect-delay 5s tcp-request content accept if { req.ssl_hello_type 1 } use_backend bk_allow if { req.ssl_sni -f allowed_sites } default_backend bk_sorry_page ``` **req.ssl\_st\_ext** : integer ``` Returns 0 if the client didn't send a SessionTicket TLS Extension (RFC5077) Returns 1 if the client sent SessionTicket TLS Extension Returns 2 if the client also sent non-zero length TLS SessionTicket Note that this only applies to raw contents found in the request buffer and not to contents deciphered via an SSL data layer, so this will not work with "bind" lines having the "ssl" option. This can for example be used to detect whether the client sent a SessionTicket or not and stick it accordingly, if no SessionTicket then stick on SessionID or don't stick as there's no server side state is there when SessionTickets are in use. ``` **req.ssl\_ver** : integer **req\_ssl\_ver** : integer (deprecated) ``` Returns an integer value containing the version of the SSL/TLS protocol of a stream present in the request buffer. Both SSLv2 hello messages and SSLv3 messages are supported. TLSv1 is announced as SSL version 3.1. The value is composed of the major version multiplied by 65536, added to the minor version. Note that this only applies to raw contents found in the request buffer and not to contents deciphered via an SSL data layer, so this will not work with "bind" lines having the "ssl" option. The ACL version of the test matches against a decimal notation in the form MAJOR.MINOR (e.g. 3.1). This fetch is mostly used in ACL. ACL derivatives : req.ssl_ver : decimal match ``` **res.len** : integer ``` Returns an integer value corresponding to the number of bytes present in the response buffer. This is mostly used in ACL. It is important to understand that this test does not return false as long as the buffer is changing. This means that a check with equality to zero will almost always immediately match at the beginning of the session, while a test for more data will wait for that data to come in and return false only when HAProxy is certain that no more data will come in. This test was designed to be used with TCP response content inspection. But it may also be used in tcp-check based expect rules. ``` **res.payload**(<offset>,<length>) : binary ``` This extracts a binary block of <length> bytes and starting at byte <offset> in the response buffer. As a special case, if the <length> argument is zero, the whole buffer from <offset> to the end is extracted. This can be used with ACLs in order to check for the presence of some content in a buffer at any location. It may also be used in tcp-check based expect rules. ``` **res.payload\_lv**(<offset1>,<length>[,<offset2>]) : binary ``` This extracts a binary block whose size is specified at <offset1> for <length> bytes, and which starts at <offset2> if specified or just after the length in the response buffer. The <offset2> parameter also supports relative offsets if prepended with a '+' or '-' sign. It may also be used in tcp-check based expect rules. ``` Example : ``` please consult the example from the "[stick store-response](#stick%20store-response)" keyword. 
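  # For convenience, a condensed sketch of that idea (addresses and the
  # backend name are placeholders) : learn the SSL session ID from the
  # server hello and stick on the one found in the client hello.
  backend bk_ssl_passthrough
      mode tcp
      balance roundrobin
      # maximum SSL session ID length is 32 bytes
      stick-table type binary len 32 size 30k expire 30m
      acl clienthello req.ssl_hello_type 1
      acl serverhello res.ssl_hello_type 2
      tcp-request inspect-delay 5s
      tcp-request content accept if clienthello
      tcp-response content accept if serverhello
      # the session ID length is coded on 1 byte at offset 43 and its
      # value starts at offset 44
      stick on req.payload_lv(43,1) if clienthello
      stick store-response res.payload_lv(43,1) if serverhello
      server s1 192.168.1.1:443
      server s2 192.168.1.2:443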
``` **res.ssl\_hello\_type** : integer **rep\_ssl\_hello\_type** : integer (deprecated) ``` Returns an integer value containing the type of the SSL hello message found in the response buffer if the buffer contains data that parses as a complete SSL (v3 or superior) hello message. Note that this only applies to raw contents found in the response buffer and not to contents deciphered via an SSL data layer, so this will not work with "server" lines having the "ssl" option. This is mostly used in ACL to detect presence of an SSL hello message that is supposed to contain an SSL session ID usable for stickiness. ``` **wait\_end** : boolean ``` This fetch either returns true when the inspection period is over, or does not fetch. It is only used in ACLs, in conjunction with content analysis to avoid returning a wrong verdict early. It may also be used to delay some actions, such as a delayed reject for some special addresses. Since it either stops the rules evaluation or immediately returns true, it is recommended to use this acl as the last one in a rule. Please note that the default ACL "WAIT_END" is always usable without prior declaration. This test was designed to be used with TCP request content inspection. ``` Examples : ``` # delay every incoming request by 2 seconds tcp-request inspect-delay 2s tcp-request content accept if WAIT_END # don't immediately tell bad guys they are rejected tcp-request inspect-delay 10s acl goodguys src 10.0.0.0/24 acl badguys src 10.0.1.0/24 tcp-request content accept if goodguys tcp-request content reject if badguys WAIT_END tcp-request content reject ``` #### 7.3.6. Fetching HTTP samples (Layer 7) ``` It is possible to fetch samples from HTTP contents, requests and responses. This application layer is also called layer 7. It is only possible to fetch the data in this section when a full HTTP request or response has been parsed from its respective request or response buffer. This is always the case with all HTTP specific rules and for sections running with "mode http". When using TCP content inspection, it may be necessary to support an inspection delay in order to let the request or response come in first. These fetches may require a bit more CPU resources than the layer 4 ones, but not much since the request and response are indexed. Note : Regarding HTTP processing from the tcp-request content rules, everything will work as expected from an HTTP proxy. However, from a TCP proxy, without an HTTP upgrade, it will only work for HTTP/1 content. For HTTP/2 content, only the preface is visible. Thus, it is only possible to rely to "[req.proto\_http](#req.proto_http)", "[req.ver](#req.ver)" and eventually "[method](#method)" sample fetches. All other L7 sample fetches will fail. After an HTTP upgrade, they will work in the same manner than from an HTTP proxy. ``` **base** : string ``` This returns the concatenation of the first Host header and the path part of the request, which starts at the first slash and ends before the question mark. It can be useful in virtual hosted environments to detect URL abuses as well as to improve shared caches efficiency. Using this with a limited size stick table also allows one to collect statistics about most commonly requested objects by host/path. With ACLs it can allow simple content switching rules involving the host and the path at the same time, such as "www.example.com/favicon.ico". See also "[path](#path)" and "uri". 
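  # Illustrative sketch (table size, threshold and names are assumptions) :
  # rate-limit requests per Host+path combination, keyed on "base".
  backend st_per_url
      stick-table type string len 128 size 1m expire 10m store http_req_rate(10s)
  frontend fe_web
      bind :80
      mode http
      http-request track-sc0 base table st_per_url
      http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
      default_backend bk_app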
ACL derivatives : base : exact string match base_beg : prefix match base_dir : subdir match base_dom : domain match base_end : suffix match base_len : length match base_reg : regex match base_sub : substring match ``` **base32** : integer ``` This returns a 32-bit hash of the value returned by the "[base](#base)" fetch method above. This is useful to track per-URL activity on high traffic sites without having to store all URLs. Instead, a shorter hash is stored, saving a lot of memory. The output type is an unsigned integer. The hash function used is SDBM with full avalanche on the output. Technically, base32 is exactly equal to "base,sdbm(1)". ``` **base32+src** : binary ``` This returns the concatenation of the base32 fetch above and the src fetch below. The resulting type is of type binary, with a size of 8 or 20 bytes depending on the source address family. This can be used to track per-IP, per-URL counters. ``` **baseq** : string ``` This returns the concatenation of the first Host header and the path part of the request with the query-string, which starts at the first slash. Using this instead of "[base](#base)" allows one to properly identify the target resource, for statistics or caching use cases. See also "[path](#path)", "[pathq](#pathq)" and "[base](#base)". ``` **capture.req.hdr**(<idx>) : string ``` This extracts the content of the header captured by the "capture request header" directive; <idx> is the position of the capture keyword in the configuration. ``` **See also:** "[capture request header](#capture%20request%20header)". **capture.req.method** : string ``` This extracts the METHOD of an HTTP request. Unlike "[method](#method)", it can be used in both request and response because it's allocated. ``` **capture.req.uri** : string ``` This extracts the request's URI, which starts at the first slash and ends before the first space in the request (without the host part). Unlike "[path](#path)" and "[url](#url)", it can be used in both request and response because it's allocated. ``` **capture.req.ver** : string ``` This extracts the request's HTTP version and returns either "HTTP/1.0" or "HTTP/1.1". Unlike "[req.ver](#req.ver)", it can be used in requests, responses, and logs because it relies on a persistent flag. ``` **capture.res.hdr**(<idx>) : string ``` This extracts the content of the header captured by the "capture response header" directive; <idx> is the position of the capture keyword in the configuration. The first entry is an index of 0. ``` **See also:** "[capture response header](#capture%20response%20header)" **capture.res.ver** : string ``` This extracts the response's HTTP version and returns either "HTTP/1.0" or "HTTP/1.1". Unlike "[res.ver](#res.ver)", it can be used in logs because it relies on a persistent flag. ``` **req.body** : binary ``` This returns the HTTP request's available body as a block of data. It is recommended to use "[option http-buffer-request](#option%20http-buffer-request)" to be sure to wait, as much as possible, for the request's body. ``` **req.body\_param**([<name>]) : string ``` This fetch assumes that the body of the POST request is url-encoded. The user can check if the "content-type" contains the value "application/x-www-form-urlencoded". This extracts the first occurrence of the parameter <name> in the body, which ends before '&'. The parameter name is case-sensitive. If no name is given, any parameter will match, and the first one will be returned.
The result is a string corresponding to the value of the parameter <name> as presented in the request body (no URL decoding is performed). Note that the ACL version of this fetch iterates over multiple parameters and will iteratively report all parameters values if no name is given. ``` **req.body\_len** : integer ``` This returns the length of the HTTP request's available body in bytes. It may be lower than the advertised length if the body is larger than the buffer. It is recommended to use "[option http-buffer-request](#option%20http-buffer-request)" to be sure to wait, as much as possible, for the request's body. ``` **req.body\_size** : integer ``` This returns the advertised length of the HTTP request's body in bytes. It will represent the advertised Content-Length header, or the size of the available data in case of chunked encoding. ``` **req.cook**([<name>]) : string **cook**([<name>]) : string (deprecated) ``` This extracts the last occurrence of the cookie name <name> on a "Cookie" header line from the request, and returns its value as string. If no name is specified, the first cookie value is returned. When used with ACLs, all matching cookies are evaluated. Spaces around the name and the value are ignored as requested by the Cookie header specification (RFC6265). The cookie name is case-sensitive. Empty cookies are valid, so an empty cookie may very well return an empty value if it is present. Use the "found" match to detect presence. Use the res.cook() variant for response cookies sent by the server. ACL derivatives : req.cook([<name>]) : exact string match req.cook_beg([<name>]) : prefix match req.cook_dir([<name>]) : subdir match req.cook_dom([<name>]) : domain match req.cook_end([<name>]) : suffix match req.cook_len([<name>]) : length match req.cook_reg([<name>]) : regex match req.cook_sub([<name>]) : substring match ``` **req.cook\_cnt**([<name>]) : integer **cook\_cnt**([<name>]) : integer (deprecated) ``` Returns an integer value representing the number of occurrences of the cookie <name> in the request, or all cookies if <name> is not specified. ``` **req.cook\_val**([<name>]) : integer **cook\_val**([<name>]) : integer (deprecated) ``` This extracts the last occurrence of the cookie name <name> on a "Cookie" header line from the request, and converts its value to an integer which is returned. If no name is specified, the first cookie value is returned. When used in ACLs, all matching names are iterated over until a value matches. ``` **cookie**([<name>]) : string (deprecated) ``` This extracts the last occurrence of the cookie name <name> on a "Cookie" header line from the request, or a "Set-Cookie" header from the response, and returns its value as a string. A typical use is to get multiple clients sharing a same profile use the same server. This can be similar to what "appsession" did with the "request-learn" statement, but with support for multi-peer synchronization and state keeping across restarts. If no name is specified, the first cookie value is returned. This fetch should not be used anymore and should be replaced by req.cook() or res.cook() instead as it ambiguously uses the direction based on the context where it is used. ``` **hdr**([<name>[,<occ>]]) : string ``` This is equivalent to req.hdr() when used on requests, and to res.hdr() when used on responses. Please refer to these respective fetches for more details. In case of doubt about the fetch direction, please use the explicit ones. 
Note that contrary to the hdr() sample fetch method, the hdr_* ACL keywords unambiguously apply to the request headers. ``` **req.fhdr**(<name>[,<occ>]) : string ``` This returns the full value of the last occurrence of header <name> in an HTTP request. It differs from req.hdr() in that any commas present in the value are returned and are not used as delimiters. This is sometimes useful with headers such as User-Agent. When used from an ACL, all occurrences are iterated over until a match is found. Optionally, a specific occurrence might be specified as a position number. Positive values indicate a position from the first occurrence, with 1 being the first one. Negative values indicate positions relative to the last one, with -1 being the last one. ``` **req.fhdr\_cnt**([<name>]) : integer ``` Returns an integer value representing the number of occurrences of request header field name <name>, or the total number of header fields if <name> is not specified. Like req.fhdr() it differs from res.hdr_cnt() by not splitting headers at commas. ``` **req.hdr**([<name>[,<occ>]]) : string ``` This returns the last comma-separated value of the header <name> in an HTTP request. The fetch considers any comma as a delimiter for distinct values. This is useful if you need to process headers that are defined to be a list of values, such as Accept, or X-Forwarded-For. If full-line headers are desired instead, use req.fhdr(). Please carefully check RFC 7231 to know how certain headers are supposed to be parsed. Also, some of them are case insensitive (e.g. Connection). When used from an ACL, all occurrences are iterated over until a match is found. Optionally, a specific occurrence might be specified as a position number. Positive values indicate a position from the first occurrence, with 1 being the first one. Negative values indicate positions relative to the last one, with -1 being the last one. A typical use is with the X-Forwarded-For header once converted to IP, associated with an IP stick-table. ACL derivatives : hdr([<name>[,<occ>]]) : exact string match hdr_beg([<name>[,<occ>]]) : prefix match hdr_dir([<name>[,<occ>]]) : subdir match hdr_dom([<name>[,<occ>]]) : domain match hdr_end([<name>[,<occ>]]) : suffix match hdr_len([<name>[,<occ>]]) : length match hdr_reg([<name>[,<occ>]]) : regex match hdr_sub([<name>[,<occ>]]) : substring match ``` **req.hdr\_cnt**([<name>]) : integer **hdr\_cnt**([<header>]) : integer (deprecated) ``` Returns an integer value representing the number of occurrences of request header field name <name>, or the total number of header field values if <name> is not specified. Like req.hdr() it counts each comma separated part of the header's value. If counting of full-line headers is desired, then req.fhdr_cnt() should be used instead. With ACLs, it can be used to detect presence, absence or abuse of a specific header, as well as to block request smuggling attacks by rejecting requests which contain more than one of certain headers. Refer to req.hdr() for more information on header matching. ``` **req.hdr\_ip**([<name>[,<occ>]]) : ip **hdr\_ip**([<name>[,<occ>]]) : ip (deprecated) ``` This extracts the last occurrence of header <name> in an HTTP request, converts it to an IPv4 or IPv6 address and returns this address. When used with ACLs, all occurrences are checked, and if <name> is omitted, every value of every header is checked. 
The parser strictly adheres to the format described in RFC7239, with the extension that IPv4 addresses may optionally be followed by a colon (':') and a valid decimal port number (0 to 65535), which will be silently dropped. All other forms will not match and will cause the address to be ignored. The <occ> parameter is processed as with req.hdr(). A typical use is with the X-Forwarded-For and X-Client-IP headers. ``` **req.hdr\_val**([<name>[,<occ>]]) : integer **hdr\_val**([<name>[,<occ>]]) : integer (deprecated) ``` This extracts the last occurrence of header <name> in an HTTP request, and converts it to an integer value. When used with ACLs, all occurrences are checked, and if <name> is omitted, every value of every header is checked. The <occ> parameter is processed as with req.hdr(). A typical use is with the X-Forwarded-For header. ``` **req.hdrs** : string ``` Returns the current request headers as string including the last empty line separating headers from the request body. The last empty line can be used to detect a truncated header block. This sample fetch is useful for some SPOE headers analyzers and for advanced logging. ``` **req.hdrs\_bin** : binary ``` Returns the current request headers contained in preparsed binary form. This is useful for offloading some processing with SPOE. Each string is described by a length followed by the number of bytes indicated in the length. The length is represented using the variable integer encoding detailed in the SPOE documentation. The end of the list is marked by a couple of empty header names and values (length of 0 for both). *(<str:header-name><str:header-value>)<empty string><empty string> int: refer to the SPOE documentation for the encoding str: <int:length><bytes> ``` **http\_auth**(<userlist>) : boolean ``` Returns a boolean indicating whether the authentication data received from the client match a username & password stored in the specified userlist. This fetch function is not really useful outside of ACLs. Currently only http basic auth is supported. ``` **http\_auth\_bearer**([<header>]) : string ``` Returns the client-provided token found in the authorization data when the Bearer scheme is used (to send JSON Web Tokens for instance). No check is performed on the data sent by the client. If a specific <header> is supplied, it will parse this header instead of the Authorization one. ``` **http\_auth\_group**(<userlist>) : string ``` Returns a string corresponding to the user name found in the authentication data received from the client if both the user name and password are valid according to the specified userlist. The main purpose is to use it in ACLs where it is then checked whether the user belongs to any group within a list. This fetch function is not really useful outside of ACLs. Currently only http basic auth is supported. ACL derivatives : http_auth_group(<userlist>) : group ... Returns true when the user extracted from the request and whose password is valid according to the specified userlist belongs to at least one of the groups. ``` **http\_auth\_pass** : string ``` Returns the user's password found in the authentication data received from the client, as supplied in the Authorization header. No checks are performed by this sample fetch. Only Basic authentication is supported. ``` **http\_auth\_type** : string ``` Returns the authentication method found in the authentication data received from the client, as supplied in the Authorization header. No checks are performed by this sample fetch.
Only Basic authentication is supported. ``` **http\_auth\_user** : string ``` Returns the user name found in the authentication data received from the client, as supplied in the Authorization header. No checks are performed by this sample fetch. Only Basic authentication is supported. ``` **http\_first\_req** : boolean ``` Returns true when the request being processed is the first one of the connection. This can be used to add or remove headers that may be missing from some requests when a request is not the first one, or to help grouping requests in the logs. ``` **method** : integer + string ``` Returns an integer value corresponding to the method in the HTTP request. For example, "GET" equals 1 (check sources to establish the matching). Value 9 means "other method" and may be converted to a string extracted from the stream. This should not be used directly as a sample; it is only meant to be used from ACLs, which transparently convert methods from patterns to these integer + string values. Some predefined ACLs already check for most common methods. ACL derivatives : method : case insensitive method match ``` Example : ``` # only accept GET and HEAD requests
acl valid_method method GET HEAD
http-request deny if ! valid_method ``` **path** : string ``` This extracts the request's URL path, which starts at the first slash and ends before the question mark (without the host part). A typical use is with prefetch-capable caches, and with portals which need to aggregate multiple pieces of information from databases and keep them in caches. Note that with outgoing caches, it would be wiser to use "[url](#url)" instead. With ACLs, it's typically used to match exact file names (e.g. "/login.php"), or directory parts using the derivative forms. See also the "[url](#url)" and "[base](#base)" fetch methods. ACL derivatives : path : exact string match path_beg : prefix match path_dir : subdir match path_dom : domain match path_end : suffix match path_len : length match path_reg : regex match path_sub : substring match ``` **pathq** : string ``` This extracts the request's URL path with the query-string, which starts at the first slash. This sample fetch is pretty handy to always retrieve a relative URI, excluding the scheme and the authority part, if any. Indeed, while it is the common representation for an HTTP/1.1 request target, in HTTP/2, an absolute URI is often used. This sample fetch will return the same result in both cases. ``` **query** : string ``` This extracts the request's query string, which starts after the first question mark. If no question mark is present, this fetch returns nothing. If a question mark is present but nothing follows, it returns an empty string. This means it's possible to easily know whether a query string is present using the "found" matching method. This fetch is the complement of "[path](#path)" which stops before the question mark. ``` **req.hdr\_names**([<delim>]) : string ``` This builds a string made from the concatenation of all header names as they appear in the request when the rule is evaluated. The default delimiter is the comma (',') but it may be overridden as an optional argument <delim>. In this case, only the first character of <delim> is considered. ``` **req.ver** : string **req\_ver** : string (deprecated) ``` Returns the version string from the HTTP request, for example "1.1". This can be useful for logs, but is mostly there for ACL. Some predefined ACLs already check for versions 1.0 and 1.1.
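  # Illustrative sketch : refuse HTTP/1.0 requests lacking a Host header
  # and capture the request version for logging.
  http-request deny if { req.ver 1.0 } !{ req.hdr(host) -m found }
  http-request capture req.ver len 8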
ACL derivatives : req.ver : exact string match ``` **res.body** : binary ``` This returns the HTTP response's available body as a block of data. Unlike the request side, there is no directive to wait for the response's body. This sample fetch is really useful (and usable) in the health-check context. It may be used in tcp-check based expect rules. ``` **res.body\_len** : integer ``` This returns the length of the HTTP response available body in bytes. Unlike the request side, there is no directive to wait for the response's body. This sample fetch is really useful (and usable) in the health-check context. It may be used in tcp-check based expect rules. ``` **res.body\_size** : integer ``` This returns the advertised length of the HTTP response body in bytes. It will represent the advertised Content-Length header, or the size of the available data in case of chunked encoding. Unlike the request side, there is no directive to wait for the response body. This sample fetch is really useful (and usable) in the health-check context. It may be used in tcp-check based expect rules. ``` **res.cache\_hit** : boolean ``` Returns the boolean "true" value if the response has been built out of an HTTP cache entry, otherwise returns boolean "false". ``` **res.cache\_name** : string ``` Returns a string containing the name of the HTTP cache that was used to build the HTTP response if res.cache_hit is true, otherwise returns an empty string. ``` **res.comp** : boolean ``` Returns the boolean "true" value if the response has been compressed by HAProxy, otherwise returns boolean "false". This may be used to add information in the logs. ``` **res.comp\_algo** : string ``` Returns a string containing the name of the algorithm used if the response was compressed by HAProxy, for example : "deflate". This may be used to add some information in the logs. ``` **res.cook**([<name>]) : string **scook**([<name>]) : string (deprecated) ``` This extracts the last occurrence of the cookie name <name> on a "Set-Cookie" header line from the response, and returns its value as string. If no name is specified, the first cookie value is returned. It may be used in tcp-check based expect rules. ACL derivatives : res.scook([<name>] : exact string match ``` **res.cook\_cnt**([<name>]) : integer **scook\_cnt**([<name>]) : integer (deprecated) ``` Returns an integer value representing the number of occurrences of the cookie <name> in the response, or all cookies if <name> is not specified. This is mostly useful when combined with ACLs to detect suspicious responses. It may be used in tcp-check based expect rules. ``` **res.cook\_val**([<name>]) : integer **scook\_val**([<name>]) : integer (deprecated) ``` This extracts the last occurrence of the cookie name <name> on a "Set-Cookie" header line from the response, and converts its value to an integer which is returned. If no name is specified, the first cookie value is returned. It may be used in tcp-check based expect rules. ``` **res.fhdr**([<name>[,<occ>]]) : string ``` This fetch works like the req.fhdr() fetch with the difference that it acts on the headers within an HTTP response. Like req.fhdr() the res.fhdr() fetch returns full values. If the header is defined to be a list you should use res.hdr(). This fetch is sometimes useful with headers such as Date or Expires. It may be used in tcp-check based expect rules. ``` **res.fhdr\_cnt**([<name>]) : integer ``` This fetch works like the req.fhdr_cnt() fetch with the difference that it acts on the headers within an HTTP response. 
Like req.fhdr_cnt() the res.fhdr_cnt() fetch acts on full values. If the header is defined to be a list you should use res.hdr_cnt(). It may be used in tcp-check based expect rules. ``` **res.hdr**([<name>[,<occ>]]) : string **shdr**([<name>[,<occ>]]) : string (deprecated) ``` This fetch works like the req.hdr() fetch with the difference that it acts on the headers within an HTTP response. Like req.hdr() the res.hdr() fetch considers the comma to be a delimiter. If this is not desired res.fhdr() should be used. It may be used in tcp-check based expect rules. ACL derivatives : res.hdr([<name>[,<occ>]]) : exact string match res.hdr_beg([<name>[,<occ>]]) : prefix match res.hdr_dir([<name>[,<occ>]]) : subdir match res.hdr_dom([<name>[,<occ>]]) : domain match res.hdr_end([<name>[,<occ>]]) : suffix match res.hdr_len([<name>[,<occ>]]) : length match res.hdr_reg([<name>[,<occ>]]) : regex match res.hdr_sub([<name>[,<occ>]]) : substring match ``` **res.hdr\_cnt**([<name>]) : integer **shdr\_cnt**([<name>]) : integer (deprecated) ``` This fetch works like the req.hdr_cnt() fetch with the difference that it acts on the headers within an HTTP response. Like req.hdr_cnt() the res.hdr_cnt() fetch considers the comma to be a delimiter. If this is not desired res.fhdr_cnt() should be used. It may be used in tcp-check based expect rules. ``` **res.hdr\_ip**([<name>[,<occ>]]) : ip **shdr\_ip**([<name>[,<occ>]]) : ip (deprecated) ``` This fetch works like the req.hdr_ip() fetch with the difference that it acts on the headers within an HTTP response. This can be useful to learn some data into a stick table. It may be used in tcp-check based expect rules. ``` **res.hdr\_names**([<delim>]) : string ``` This builds a string made from the concatenation of all header names as they appear in the response when the rule is evaluated. The default delimiter is the comma (',') but it may be overridden as an optional argument <delim>. In this case, only the first character of <delim> is considered. It may be used in tcp-check based expect rules. ``` **res.hdr\_val**([<name>[,<occ>]]) : integer **shdr\_val**([<name>[,<occ>]]) : integer (deprecated) ``` This fetch works like the req.hdr_val() fetch with the difference that it acts on the headers within an HTTP response. This can be useful to learn some data into a stick table. It may be used in tcp-check based expect rules. ``` **res.hdrs** : string ``` Returns the current response headers as string including the last empty line separating headers from the request body. The last empty line can be used to detect a truncated header block. This sample fetch is useful for some SPOE headers analyzers and for advanced logging. It may also be used in tcp-check based expect rules. ``` **res.hdrs\_bin** : binary ``` Returns the current response headers contained in preparsed binary form. This is useful for offloading some processing with SPOE. It may be used in tcp-check based expect rules. Each string is described by a length followed by the number of bytes indicated in the length. The length is represented using the variable integer encoding detailed in the SPOE documentation. The end of the list is marked by a couple of empty header names and values (length of 0 for both). *(<str:header-name><str:header-value>)<empty string><empty string> int: refer to the SPOE documentation for the encoding str: <int:length><bytes> ``` **res.ver** : string **resp\_ver** : string (deprecated) ``` Returns the version string from the HTTP response, for example "1.1". 
This can be useful for logs, but is mostly there for ACL. It may be used in tcp-check based expect rules. ACL derivatives : resp.ver : exact string match ``` **set-cookie**([<name>]) : string (deprecated) ``` This extracts the last occurrence of the cookie name <name> on a "Set-Cookie" header line from the response and uses the corresponding value to match. This can be comparable to what "appsession" did with default options, but with support for multi-peer synchronization and state keeping across restarts. This fetch function is deprecated and has been superseded by the "[res.cook](#res.cook)" fetch. This keyword will disappear soon. ``` **status** : integer ``` Returns an integer containing the HTTP status code in the HTTP response, for example, 302. It is mostly used within ACLs and integer ranges, for example, to remove any Location header if the response is not a 3xx. It may be used in tcp-check based expect rules. ``` **unique-id** : string ``` Returns the unique-id attached to the request. The directive "[unique-id-format](#unique-id-format)" must be set. If it is not set, the unique-id sample fetch fails. Note that the unique-id is usually used with HTTP requests, however this sample fetch can be used with other protocols. Obviously, if it is used with other protocols than HTTP, the unique-id-format directive must not contain HTTP parts. See: unique-id-format and unique-id-header ``` **url** : string ``` This extracts the request's URL as presented in the request. A typical use is with prefetch-capable caches, and with portals which need to aggregate multiple information from databases and keep them in caches. With ACLs, using "[path](#path)" is preferred over using "[url](#url)", because clients may send a full URL as is normally done with proxies. The only real use is to match "*" which does not match in "[path](#path)", and for which there is already a predefined ACL. See also "[path](#path)" and "[base](#base)". ACL derivatives : url : exact string match url_beg : prefix match url_dir : subdir match url_dom : domain match url_end : suffix match url_len : length match url_reg : regex match url_sub : substring match ``` **url\_ip** : ip ``` This extracts the IP address from the request's URL when the host part is presented as an IP address. Its use is very limited. For instance, a monitoring system might use this field as an alternative for the source IP in order to test what path a given source address would follow, or to force an entry in a table for a given source address. It may be used in combination with 'http-request set-dst' to emulate the older 'option http_proxy'. ``` **url\_port** : integer ``` This extracts the port part from the request's URL. Note that if the port is not specified in the request, port 80 is assumed.. ``` **urlp**([<name>[,<delim>]]) : string **url\_param**([<name>[,<delim>]]) : string ``` This extracts the first occurrence of the parameter <name> in the query string, which begins after either '?' or <delim>, and which ends before '&', ';' or <delim>. The parameter name is case-sensitive. If no name is given, any parameter will match, and the first one will be returned. The result is a string corresponding to the value of the parameter <name> as presented in the request (no URL decoding is performed). This can be used for session stickiness based on a client ID, to extract an application cookie passed as a URL parameter, or in ACLs to apply some checks. 
Note that the ACL version of this fetch iterates over multiple parameters and will iteratively report all parameters values if no name is given ACL derivatives : urlp(<name>[,<delim>]) : exact string match urlp_beg(<name>[,<delim>]) : prefix match urlp_dir(<name>[,<delim>]) : subdir match urlp_dom(<name>[,<delim>]) : domain match urlp_end(<name>[,<delim>]) : suffix match urlp_len(<name>[,<delim>]) : length match urlp_reg(<name>[,<delim>]) : regex match urlp_sub(<name>[,<delim>]) : substring match ``` Example : ``` # match http://example.com/foo?PHPSESSIONID=some\_id stick on urlp(PHPSESSIONID) # match http://example.com/foo;JSESSIONID=some\_id stick on urlp(JSESSIONID,;) ``` **urlp\_val**([<name>[,<delim>]]) : integer ``` See "[urlp](#urlp)" above. This one extracts the URL parameter <name> in the request and converts it to an integer value. This can be used for session stickiness based on a user ID for example, or with ACLs to match a page number or price. ``` **url32** : integer ``` This returns a 32-bit hash of the value obtained by concatenating the first Host header and the whole URL including parameters (not only the path part of the request, as in the "[base32](#base32)" fetch above). This is useful to track per-URL activity. A shorter hash is stored, saving a lot of memory. The output type is an unsigned integer. ``` **url32+src** : binary ``` This returns the concatenation of the "[url32](#url32)" fetch and the "[src](#src)" fetch. The resulting type is of type binary, with a size of 8 or 20 bytes depending on the source address family. This can be used to track per-IP, per-URL counters. ``` #### 7.3.7. Fetching samples for developers ``` This set of sample fetch methods is reserved to developers and must never be used on a production environment, except on developer demand, for debugging purposes. Moreover, no special care will be taken on backwards compatibility. There is no warranty the following sample fetches will never change, be renamed or simply removed. So be really careful if you should use one of them. To avoid any ambiguity, these sample fetches are placed in the dedicated scope "internal", for instance "[internal.strm.is\_htx](#internal.strm.is_htx)". ``` **internal.htx.data** : integer ``` Returns the size in bytes used by data in the HTX message associated to a channel. The channel is chosen depending on the sample direction. ``` **internal.htx.free** : integer ``` Returns the free space (size - used) in bytes in the HTX message associated to a channel. The channel is chosen depending on the sample direction. ``` **internal.htx.free\_data** : integer ``` Returns the free space for the data in bytes in the HTX message associated to a channel. The channel is chosen depending on the sample direction. ``` **internal.htx.has\_eom** : boolean ``` Returns true if the HTX message associated to a channel contains the end-of-message flag (EOM). Otherwise, it returns false. The channel is chosen depending on the sample direction. ``` **internal.htx.nbblks** : integer ``` Returns the number of blocks present in the HTX message associated to a channel. The channel is chosen depending on the sample direction. ``` **internal.htx.size** : integer ``` Returns the total size in bytes of the HTX message associated to a channel. The channel is chosen depending on the sample direction. ``` **internal.htx.used** : integer ``` Returns the total size used in bytes (data + metadata) in the HTX message associated to a channel. The channel is chosen depending on the sample direction. 
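  # Developer-only sketch (never for production) : store the HTX buffer
  # usage in a transaction variable so it can be appended to a custom
  # log-format, assuming these fetches are usable in this context.
  http-request set-var(txn.htx_used) internal.htx.used if { internal.strm.is_htx }
  log-format "%ci:%cp %ft %ST htx_used=%[var(txn.htx_used)]"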
``` **internal.htx\_blk.size**(<idx>) : integer ``` Returns the size of the block at the position <idx> in the HTX message associated to a channel or 0 if it does not exist. The channel is chosen depending on the sample direction. <idx> may be any positive integer or one of the special value : * head : The oldest inserted block * tail : The newest inserted block * first : The first block where to (re)start the analysis ``` **internal.htx\_blk.type**(<idx>) : string ``` Returns the type of the block at the position <idx> in the HTX message associated to a channel or "HTX_BLK_UNUSED" if it does not exist. The channel is chosen depending on the sample direction. <idx> may be any positive integer or one of the special value : * head : The oldest inserted block * tail : The newest inserted block * first : The first block where to (re)start the analysis ``` **internal.htx\_blk.data**(<idx>) : binary ``` Returns the value of the DATA block at the position <idx> in the HTX message associated to a channel or an empty string if it does not exist or if it is not a DATA block. The channel is chosen depending on the sample direction. <idx> may be any positive integer or one of the special value : * head : The oldest inserted block * tail : The newest inserted block * first : The first block where to (re)start the analysis ``` **internal.htx\_blk.hdrname**(<idx>) : string ``` Returns the header name of the HEADER block at the position <idx> in the HTX message associated to a channel or an empty string if it does not exist or if it is not an HEADER block. The channel is chosen depending on the sample direction. <idx> may be any positive integer or one of the special value : * head : The oldest inserted block * tail : The newest inserted block * first : The first block where to (re)start the analysis ``` **internal.htx\_blk.hdrval**(<idx>) : string ``` Returns the header value of the HEADER block at the position <idx> in the HTX message associated to a channel or an empty string if it does not exist or if it is not an HEADER block. The channel is chosen depending on the sample direction. <idx> may be any positive integer or one of the special value : * head : The oldest inserted block * tail : The newest inserted block * first : The first block where to (re)start the analysis ``` **internal.htx\_blk.start\_line**(<idx>) : string ``` Returns the value of the REQ_SL or RES_SL block at the position <idx> in the HTX message associated to a channel or an empty string if it does not exist or if it is not a SL block. The channel is chosen depending on the sample direction. <idx> may be any positive integer or one of the special value : * head : The oldest inserted block * tail : The newest inserted block * first : The first block where to (re)start the analysis ``` **internal.strm.is\_htx** : boolean ``` Returns true if the current stream is an HTX stream. It means the data in the channels buffers are stored using the internal HTX representation. Otherwise, it returns false. ``` ### 7.4. Pre-defined ACLs ``` Some predefined ACLs are hard-coded so that they do not have to be declared in every frontend which needs them. They all have their names in upper case in order to avoid confusion. Their equivalence is provided below. 
```
| ACL name | Equivalent to | Usage |
| --- | --- | --- |
| FALSE | always\_false | never match |
| HTTP | req.proto\_http | match if request protocol is valid HTTP |
| HTTP\_1.0 | req.ver 1.0 | match if HTTP request version is 1.0 |
| HTTP\_1.1 | req.ver 1.1 | match if HTTP request version is 1.1 |
| HTTP\_2.0 | req.ver 2.0 | match if HTTP request version is 2.0 |
| HTTP\_CONTENT | req.hdr\_val(content-length) gt 0 | match an existing content-length in the HTTP request |
| HTTP\_URL\_ABS | url\_reg ^[^/:]\*:// | match absolute URL with scheme |
| HTTP\_URL\_SLASH | url\_beg / | match URL beginning with "/" |
| HTTP\_URL\_STAR | url \* | match URL equal to "\*" |
| LOCALHOST | src 127.0.0.1/8 ::1 | match connection from local host |
| METH\_CONNECT | method CONNECT | match HTTP CONNECT method |
| METH\_DELETE | method DELETE | match HTTP DELETE method |
| METH\_GET | method GET HEAD | match HTTP GET or HEAD method |
| METH\_HEAD | method HEAD | match HTTP HEAD method |
| METH\_OPTIONS | method OPTIONS | match HTTP OPTIONS method |
| METH\_POST | method POST | match HTTP POST method |
| METH\_PUT | method PUT | match HTTP PUT method |
| METH\_TRACE | method TRACE | match HTTP TRACE method |
| RDP\_COOKIE | req.rdp\_cookie\_cnt gt 0 | match presence of an RDP cookie in the request buffer |
| REQ\_CONTENT | req.len gt 0 | match data in the request buffer |
| TRUE | always\_true | always match |
| WAIT\_END | wait\_end | wait for end of content analysis |

8. Logging
-----------
``` One of HAProxy's strong points certainly lies in its precise logs. It probably provides the finest level of information available for such a product, which is very important for troubleshooting complex environments. Standard information provided in logs includes client ports, TCP/HTTP state timers, precise session state at termination and precise termination cause, information about decisions to direct traffic to a server, and of course the ability to capture arbitrary headers. In order to improve administrators' reactivity, it offers a great transparency about encountered problems, both internal and external, and it is possible to send logs to different sources at the same time with different level filters :
- global process-level logs (system errors, start/stop, etc.)
- per-instance system and internal errors (lack of resource, bugs, ...)
- per-instance external troubles (servers up/down, max connections)
- per-instance activity (client connections), either at the establishment or at the termination.
- per-request control of log-level, e.g. http-request set-log-level silent if sensitive_request
The ability to distribute different levels of logs to different log servers allows several production teams to interact and to fix their problems as soon as possible. For example, the system team might monitor system-wide errors, while the application team might be monitoring the up/down status of their servers in real time, and the security team might analyze the activity logs with a one-hour delay. ``` ### 8.1. Log levels ``` TCP and HTTP connections can be logged with information such as the date, time, source IP address, destination address, connection duration, response times, HTTP request, HTTP return code, number of bytes transmitted, conditions in which the session ended, and even exchanged cookie values, for example to track a particular user's problems. All messages may be sent to up to two syslog servers. Check the "log" keyword in [section 4.2](#4.2) for more information about log facilities. ``` ### 8.2.
Log formats ``` HAProxy supports 5 log formats. Several fields are common between these formats and will be detailed in the following sections. A few of them may vary slightly with the configuration, due to indicators specific to certain options. The supported formats are as follows : - the default format, which is very basic and very rarely used. It only provides very basic information about the incoming connection at the moment it is accepted : source IP:port, destination IP:port, and frontend-name. This mode will eventually disappear so it will not be described to great extents. - the TCP format, which is more advanced. This format is enabled when "option tcplog" is set on the frontend. HAProxy will then usually wait for the connection to terminate before logging. This format provides much richer information, such as timers, connection counts, queue size, etc... This format is recommended for pure TCP proxies. - the HTTP format, which is the most advanced for HTTP proxying. This format is enabled when "[option httplog](#option%20httplog)" is set on the frontend. It provides the same information as the TCP format with some HTTP-specific fields such as the request, the status code, and captures of headers and cookies. This format is recommended for HTTP proxies. - the CLF HTTP format, which is equivalent to the HTTP format, but with the fields arranged in the same order as the CLF format. In this mode, all timers, captures, flags, etc... appear one per field after the end of the common fields, in the same order they appear in the standard HTTP format. - the custom log format, allows you to make your own log line. Next sections will go deeper into details for each of these formats. Format specification will be performed on a "[field](#field)" basis. Unless stated otherwise, a field is a portion of text delimited by any number of spaces. Since syslog servers are susceptible of inserting fields at the beginning of a line, it is always assumed that the first field is the one containing the process name and identifier. Note : Since log lines may be quite long, the log examples in sections below might be broken into multiple lines. The example log lines will be prefixed with 3 closing angle brackets ('>>>') and each time a log is broken into multiple lines, each non-final line will end with a backslash ('\') and the next line will start indented by two characters. ``` #### 8.2.1. Default log format ``` This format is used when no specific option is set. The log is emitted as soon as the connection is accepted. One should note that this currently is the only format which logs the request's destination IP and ports. ``` Example : ``` listen www mode http log global server srv1 127.0.0.1:8000 >>> Feb 6 12:12:09 localhost \ haproxy[14385]: Connect from 10.0.1.2:33312 to 10.0.3.31:8012 \ (www/HTTP) ``` ``` Field Format Extract from the example above 1 process_name '[' pid ']:' haproxy[14385]: 2 'Connect from' Connect from 3 source_ip ':' source_port 10.0.1.2:33312 4 'to' to 5 destination_ip ':' destination_port 10.0.3.31:8012 6 '(' frontend_name '/' mode ')' (www/HTTP) Detailed fields description : - "source_ip" is the IP address of the client which initiated the connection. - "source_port" is the TCP port of the client which initiated the connection. - "destination_ip" is the IP address the client connected to. - "destination_port" is the TCP port the client connected to. - "frontend_name" is the name of the frontend (or listener) which received and processed the connection. 
- "mode is the mode the frontend is operating (TCP or HTTP). In case of a UNIX socket, the source and destination addresses are marked as "unix:" and the ports reflect the internal ID of the socket which accepted the connection (the same ID as reported in the stats). It is advised not to use this deprecated format for newer installations as it will eventually disappear. ``` #### 8.2.2. TCP log format ``` The TCP format is used when "[option tcplog](#option%20tcplog)" is specified in the frontend, and is the recommended format for pure TCP proxies. It provides a lot of precious information for troubleshooting. Since this format includes timers and byte counts, the log is normally emitted at the end of the session. It can be emitted earlier if "[option logasap](#option%20logasap)" is specified, which makes sense in most environments with long sessions such as remote terminals. Sessions which match the "[monitor](#monitor)" rules are never logged. It is also possible not to emit logs for sessions for which no data were exchanged between the client and the server, by specifying "[option dontlognull](#option%20dontlognull)" in the frontend. Successful connections will not be logged if "[option dontlog-normal](#option%20dontlog-normal)" is specified in the frontend. The TCP log format is internally declared as a custom log format based on the exact following string, which may also be used as a basis to extend the format if required. Refer to [section 8.2.6](#8.2.6) "Custom log format" to see how to use this: # strict equivalent of "[option tcplog](#option%20tcplog)" log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts \ %ac/%fc/%bc/%sc/%rc %sq/%bq" A few fields may slightly vary depending on some configuration options, those are marked with a star ('*') after the field name below. ``` Example : ``` frontend fnt mode tcp option tcplog log global default_backend bck backend bck server srv1 127.0.0.1:8000 >>> Feb 6 12:12:56 localhost \ haproxy[14387]: 10.0.1.2:33313 [06/Feb/2009:12:12:51.443] fnt \ bck/srv1 0/0/5007 212 -- 0/0/0/0/3 0/0 ``` ``` Field Format Extract from the example above 1 process_name '[' pid ']:' haproxy[14387]: 2 client_ip ':' client_port 10.0.1.2:33313 3 '[' accept_date ']' [06/Feb/2009:12:12:51.443] 4 frontend_name fnt 5 backend_name '/' server_name bck/srv1 6 Tw '/' Tc '/' Tt* 0/0/5007 7 bytes_read* 212 8 termination_state -- 9 actconn '/' feconn '/' beconn '/' srv_conn '/' retries* 0/0/0/0/3 10 srv_queue '/' backend_queue 0/0 Detailed fields description : - "client_ip" is the IP address of the client which initiated the TCP connection to HAProxy. If the connection was accepted on a UNIX socket instead, the IP address would be replaced with the word "unix". Note that when the connection is accepted on a socket configured with "[accept-proxy](#accept-proxy)" and the PROXY protocol is correctly used, or with a "[accept-netscaler-cip](#accept-netscaler-cip)" and the NetScaler Client IP insertion protocol is correctly used, then the logs will reflect the forwarded connection's information. - "client_port" is the TCP port of the client which initiated the connection. If the connection was accepted on a UNIX socket instead, the port would be replaced with the ID of the accepting socket, which is also reported in the stats interface. - "accept_date" is the exact date when the connection was received by HAProxy (which might be very slightly different from the date observed on the network if there was some queuing in the system's backlog). 
This is usually the same date which may appear in any upstream firewall's log. When used in HTTP mode, the accept_date field will be reset to the first moment the connection is ready to receive a new request (end of previous response for HTTP/1, immediately after previous request for HTTP/2). - "frontend_name" is the name of the frontend (or listener) which received and processed the connection. - "backend_name" is the name of the backend (or listener) which was selected to manage the connection to the server. This will be the same as the frontend if no switching rule has been applied, which is common for TCP applications. - "server_name" is the name of the last server to which the connection was sent, which might differ from the first one if there were connection errors and a redispatch occurred. Note that this server belongs to the backend which processed the request. If the connection was aborted before reaching a server, "<NOSRV>" is indicated instead of a server name. - "Tw" is the total time in milliseconds spent waiting in the various queues. It can be "-1" if the connection was aborted before reaching the queue. See "Timers" below for more details. - "Tc" is the total time in milliseconds spent waiting for the connection to establish to the final server, including retries. It can be "-1" if the connection was aborted before a connection could be established. See "Timers" below for more details. - "Tt" is the total time in milliseconds elapsed between the accept and the last close. It covers all possible processing. There is one exception, if "[option logasap](#option%20logasap)" was specified, then the time counting stops at the moment the log is emitted. In this case, a '+' sign is prepended before the value, indicating that the final one will be larger. See "Timers" below for more details. - "bytes_read" is the total number of bytes transmitted from the server to the client when the log is emitted. If "[option logasap](#option%20logasap)" is specified, the this value will be prefixed with a '+' sign indicating that the final one may be larger. Please note that this value is a 64-bit counter, so log analysis tools must be able to handle it without overflowing. - "termination_state" is the condition the session was in when the session ended. This indicates the session state, which side caused the end of session to happen, and for what reason (timeout, error, ...). The normal flags should be "--", indicating the session was closed by either end with no data remaining in buffers. See below "Session state at disconnection" for more details. - "actconn" is the total number of concurrent connections on the process when the session was logged. It is useful to detect when some per-process system limits have been reached. For instance, if actconn is close to 512 when multiple connection errors occur, chances are high that the system limits the process to use a maximum of 1024 file descriptors and that all of them are used. See [section 3](#3) "Global parameters" to find how to tune the system. - "feconn" is the total number of concurrent connections on the frontend when the session was logged. It is useful to estimate the amount of resource required to sustain high loads, and to detect when the frontend's "maxconn" has been reached. Most often when this value increases by huge jumps, it is because there is congestion on the backend servers, but sometimes it can be caused by a denial of service attack. 
- "beconn" is the total number of concurrent connections handled by the backend when the session was logged. It includes the total number of concurrent connections active on servers as well as the number of connections pending in queues. It is useful to estimate the amount of additional servers needed to support high loads for a given application. Most often when this value increases by huge jumps, it is because there is congestion on the backend servers, but sometimes it can be caused by a denial of service attack. - "[srv\_conn](#srv_conn)" is the total number of concurrent connections still active on the server when the session was logged. It can never exceed the server's configured "maxconn" parameter. If this value is very often close or equal to the server's "maxconn", it means that traffic regulation is involved a lot, meaning that either the server's maxconn value is too low, or that there aren't enough servers to process the load with an optimal response time. When only one of the server's "[srv\_conn](#srv_conn)" is high, it usually means that this server has some trouble causing the connections to take longer to be processed than on other servers. - "[retries](#retries)" is the number of connection retries experienced by this session when trying to connect to the server. It must normally be zero, unless a server is being stopped at the same moment the connection was attempted. Frequent retries generally indicate either a network problem between HAProxy and the server, or a misconfigured system backlog on the server preventing new connections from being queued. This field may optionally be prefixed with a '+' sign, indicating that the session has experienced a redispatch after the maximal retry count has been reached on the initial server. In this case, the server name appearing in the log is the one the connection was redispatched to, and not the first one, though both may sometimes be the same in case of hashing for instance. So as a general rule of thumb, when a '+' is present in front of the retry count, this count should not be attributed to the logged server. - "srv\_queue" is the total number of requests which were processed before this one in the server queue. It is zero when the request has not gone through the server queue. It makes it possible to estimate the approximate server's response time by dividing the time spent in queue by the number of requests in the queue. It is worth noting that if a session experiences a redispatch and passes through two server queues, their positions will be cumulative. A request should not pass through both the server queue and the backend queue unless a redispatch occurs. - "backend_queue" is the total number of requests which were processed before this one in the backend's global queue. It is zero when the request has not gone through the global queue. It makes it possible to estimate the average queue length, which easily translates into a number of missing servers when divided by a server's "maxconn" parameter. It is worth noting that if a session experiences a redispatch, it may pass twice in the backend's queue, and then both positions will be cumulative. A request should not pass through both the server queue and the backend queue unless a redispatch occurs. ``` #### 8.2.3. HTTP log format ``` The HTTP format is the most complete and the best suited for HTTP proxies. It is enabled by when "[option httplog](#option%20httplog)" is specified in the frontend. 
It provides the same level of information as the TCP format with additional features which are specific to the HTTP protocol. Just like the TCP format, the log is usually emitted at the end of the session, unless "[option logasap](#option%20logasap)" is specified, which generally only makes sense for download sites. A session which matches the "[monitor](#monitor)" rules will never be logged. It is also possible not to log sessions for which no data were sent by the client by specifying "[option dontlognull](#option%20dontlognull)" in the frontend. Successful connections will not be logged if "[option dontlog-normal](#option%20dontlog-normal)" is specified in the frontend.

The HTTP log format is internally declared as a custom log format based on the exact following string, which may also be used as a basis to extend the format if required. Refer to [section 8.2.6](#8.2.6) "Custom log format" to see how to use this:

    # strict equivalent of "[option httplog](#option%20httplog)"
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \
                %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

And the CLF log format is internally declared as a custom log format based on this exact string:

    # strict equivalent of "option httplog clf"
    log-format "%{+Q}o %{-Q}ci - - [%trg] %r %ST %B \"\" \"\" %cp \
                %ms %ft %b %s %TR %Tw %Tc %Tr %Ta %tsc %ac %fc \
                %bc %sc %rc %sq %bq %CC %CS %hrl %hsl"

Most fields are shared with the TCP log, some being different. A few fields may slightly vary depending on some configuration options. Those ones are marked with a star ('*') after the field name below. ```

Example : ```

    frontend http-in
        mode http
        option httplog
        log global
        default_backend bck

    backend static
        server srv1 127.0.0.1:8000

    >>> Feb 6 12:14:14 localhost \
          haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \
          static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 {1wt.eu} \
          {} "GET /index.html HTTP/1.1" ``` ```

  Field   Format                                                    Extract from the example above
      1   process_name '[' pid ']:'                                 haproxy[14389]:
      2   client_ip ':' client_port                                 10.0.1.2:33317
      3   '[' request_date ']'                                      [06/Feb/2009:12:14:14.655]
      4   frontend_name                                             http-in
      5   backend_name '/' server_name                              static/srv1
      6   TR '/' Tw '/' Tc '/' Tr '/' Ta*                           10/0/30/69/109
      7   status_code                                               200
      8   bytes_read*                                               2750
      9   captured_request_cookie                                   -
     10   captured_response_cookie                                  -
     11   termination_state                                         ----
     12   actconn '/' feconn '/' beconn '/' srv_conn '/' retries*   1/1/1/1/0
     13   srv_queue '/' backend_queue                               0/0
     14   '{' captured_request_headers* '}'                         {1wt.eu}
     15   '{' captured_response_headers* '}'                        {}
     16   '"' http_request '"'                                      "GET /index.html HTTP/1.1"

  Detailed fields description :

- "client_ip" is the IP address of the client which initiated the TCP connection to HAProxy. If the connection was accepted on a UNIX socket instead, the IP address would be replaced with the word "unix". Note that when the connection is accepted on a socket configured with "[accept-proxy](#accept-proxy)" and the PROXY protocol is correctly used, or with a "[accept-netscaler-cip](#accept-netscaler-cip)" and the NetScaler Client IP insertion protocol is correctly used, then the logs will reflect the forwarded connection's information.

- "client_port" is the TCP port of the client which initiated the connection. If the connection was accepted on a UNIX socket instead, the port would be replaced with the ID of the accepting socket, which is also reported in the stats interface.

- "request_date" is the exact date when the first byte of the HTTP request was received by HAProxy (log field %tr).
- "frontend_name" is the name of the frontend (or listener) which received and processed the connection. - "backend_name" is the name of the backend (or listener) which was selected to manage the connection to the server. This will be the same as the frontend if no switching rule has been applied. - "server_name" is the name of the last server to which the connection was sent, which might differ from the first one if there were connection errors and a redispatch occurred. Note that this server belongs to the backend which processed the request. If the request was aborted before reaching a server, "<NOSRV>" is indicated instead of a server name. If the request was intercepted by the stats subsystem, "<STATS>" is indicated instead. - "TR" is the total time in milliseconds spent waiting for a full HTTP request from the client (not counting body) after the first byte was received. It can be "-1" if the connection was aborted before a complete request could be received or a bad request was received. It should always be very small because a request generally fits in one single packet. Large times here generally indicate network issues between the client and HAProxy or requests being typed by hand. See [section 8.4](#8.4) "Timing Events" for more details. - "Tw" is the total time in milliseconds spent waiting in the various queues. It can be "-1" if the connection was aborted before reaching the queue. See [section 8.4](#8.4) "Timing Events" for more details. - "Tc" is the total time in milliseconds spent waiting for the connection to establish to the final server, including retries. It can be "-1" if the request was aborted before a connection could be established. See section 8.4 "Timing Events" for more details. - "Tr" is the total time in milliseconds spent waiting for the server to send a full HTTP response, not counting data. It can be "-1" if the request was aborted before a complete response could be received. It generally matches the server's processing time for the request, though it may be altered by the amount of data sent by the client to the server. Large times here on "GET" requests generally indicate an overloaded server. See [section 8.4](#8.4) "Timing Events" for more details. - "Ta" is the time the request remained active in HAProxy, which is the total time in milliseconds elapsed between the first byte of the request was received and the last byte of response was sent. It covers all possible processing except the handshake (see Th) and idle time (see Ti). There is one exception, if "[option logasap](#option%20logasap)" was specified, then the time counting stops at the moment the log is emitted. In this case, a '+' sign is prepended before the value, indicating that the final one will be larger. See [section 8.4](#8.4) "Timing Events" for more details. - "status_code" is the HTTP status code returned to the client. This status is generally set by the server, but it might also be set by HAProxy when the server cannot be reached or when its response is blocked by HAProxy. - "bytes_read" is the total number of bytes transmitted to the client when the log is emitted. This does include HTTP headers. If "[option logasap](#option%20logasap)" is specified, this value will be prefixed with a '+' sign indicating that the final one may be larger. Please note that this value is a 64-bit counter, so log analysis tools must be able to handle it without overflowing. - "captured_request_cookie" is an optional "name=value" entry indicating that the client had this cookie in the request. 
The cookie name and its maximum length are defined by the "[capture cookie](#capture%20cookie)" statement in the frontend configuration. The field is a single dash ('-') when the option is not set. Only one cookie may be captured, it is generally used to track session ID exchanges between a client and a server to detect session crossing between clients due to application bugs. For more details, please consult the section "Capturing HTTP headers and cookies" below. - "captured_response_cookie" is an optional "name=value" entry indicating that the server has returned a cookie with its response. The cookie name and its maximum length are defined by the "[capture cookie](#capture%20cookie)" statement in the frontend configuration. The field is a single dash ('-') when the option is not set. Only one cookie may be captured, it is generally used to track session ID exchanges between a client and a server to detect session crossing between clients due to application bugs. For more details, please consult the section "Capturing HTTP headers and cookies" below. - "termination_state" is the condition the session was in when the session ended. This indicates the session state, which side caused the end of session to happen, for what reason (timeout, error, ...), just like in TCP logs, and information about persistence operations on cookies in the last two characters. The normal flags should begin with "--", indicating the session was closed by either end with no data remaining in buffers. See below "Session state at disconnection" for more details. - "actconn" is the total number of concurrent connections on the process when the session was logged. It is useful to detect when some per-process system limits have been reached. For instance, if actconn is close to 512 or 1024 when multiple connection errors occur, chances are high that the system limits the process to use a maximum of 1024 file descriptors and that all of them are used. See [section 3](#3) "Global parameters" to find how to tune the system. - "feconn" is the total number of concurrent connections on the frontend when the session was logged. It is useful to estimate the amount of resource required to sustain high loads, and to detect when the frontend's "maxconn" has been reached. Most often when this value increases by huge jumps, it is because there is congestion on the backend servers, but sometimes it can be caused by a denial of service attack. - "beconn" is the total number of concurrent connections handled by the backend when the session was logged. It includes the total number of concurrent connections active on servers as well as the number of connections pending in queues. It is useful to estimate the amount of additional servers needed to support high loads for a given application. Most often when this value increases by huge jumps, it is because there is congestion on the backend servers, but sometimes it can be caused by a denial of service attack. - "[srv\_conn](#srv_conn)" is the total number of concurrent connections still active on the server when the session was logged. It can never exceed the server's configured "maxconn" parameter. If this value is very often close or equal to the server's "maxconn", it means that traffic regulation is involved a lot, meaning that either the server's maxconn value is too low, or that there aren't enough servers to process the load with an optimal response time. 
When only one of the server's "[srv\_conn](#srv_conn)" is high, it usually means that this server has some trouble causing the requests to take longer to be processed than on other servers. - "[retries](#retries)" is the number of connection retries experienced by this session when trying to connect to the server. It must normally be zero, unless a server is being stopped at the same moment the connection was attempted. Frequent retries generally indicate either a network problem between HAProxy and the server, or a misconfigured system backlog on the server preventing new connections from being queued. This field may optionally be prefixed with a '+' sign, indicating that the session has experienced a redispatch after the maximal retry count has been reached on the initial server. In this case, the server name appearing in the log is the one the connection was redispatched to, and not the first one, though both may sometimes be the same in case of hashing for instance. So as a general rule of thumb, when a '+' is present in front of the retry count, this count should not be attributed to the logged server. - "srv\_queue" is the total number of requests which were processed before this one in the server queue. It is zero when the request has not gone through the server queue. It makes it possible to estimate the approximate server's response time by dividing the time spent in queue by the number of requests in the queue. It is worth noting that if a session experiences a redispatch and passes through two server queues, their positions will be cumulative. A request should not pass through both the server queue and the backend queue unless a redispatch occurs. - "backend_queue" is the total number of requests which were processed before this one in the backend's global queue. It is zero when the request has not gone through the global queue. It makes it possible to estimate the average queue length, which easily translates into a number of missing servers when divided by a server's "maxconn" parameter. It is worth noting that if a session experiences a redispatch, it may pass twice in the backend's queue, and then both positions will be cumulative. A request should not pass through both the server queue and the backend queue unless a redispatch occurs. - "captured_request_headers" is a list of headers captured in the request due to the presence of the "[capture request header](#capture%20request%20header)" statement in the frontend. Multiple headers can be captured, they will be delimited by a vertical bar ('|'). When no capture is enabled, the braces do not appear, causing a shift of remaining fields. It is important to note that this field may contain spaces, and that using it requires a smarter log parser than when it's not used. Please consult the section "Capturing HTTP headers and cookies" below for more details. - "captured_response_headers" is a list of headers captured in the response due to the presence of the "[capture response header](#capture%20response%20header)" statement in the frontend. Multiple headers can be captured, they will be delimited by a vertical bar ('|'). When no capture is enabled, the braces do not appear, causing a shift of remaining fields. It is important to note that this field may contain spaces, and that using it requires a smarter log parser than when it's not used. Please consult the section "Capturing HTTP headers and cookies" below for more details. 
- "http_request" is the complete HTTP request line, including the method, request and HTTP version string. Non-printable characters are encoded (see below the section "Non-printable characters"). This is always the last field, and it is always delimited by quotes and is the only one which can contain quotes. If new fields are added to the log format, they will be added before this field. This field might be truncated if the request is huge and does not fit in the standard syslog buffer (1024 characters). This is the reason why this field must always remain the last one. ``` #### 8.2.4. HTTPS log format ``` The HTTPS format is the best suited for HTTP over SSL connections. It is an extension of the HTTP format (see [section 8.2.3](#8.2.3)) to which SSL related information are added. It is enabled when "[option httpslog](#option%20httpslog)" is specified in the frontend. Just like the TCP and HTTP formats, the log is usually emitted at the end of the session, unless "[option logasap](#option%20logasap)" is specified. A session which matches the "[monitor](#monitor)" rules will never logged. It is also possible not to log sessions for which no data were sent by the client by specifying "option dontlognull" in the frontend. Successful connections will not be logged if "[option dontlog-normal](#option%20dontlog-normal)" is specified in the frontend. The HTTPS log format is internally declared as a custom log format based on the exact following string, which may also be used as a basis to extend the format if required. Refer to [section 8.2.6](#8.2.6) "Custom log format" to see how to use this: # strict equivalent of "[option httpslog](#option%20httpslog)" log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \ %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \ %[fc_err]/%[ssl_fc_err,hex]/%[ssl_c_err]/\ %[ssl_c_ca_err]/%[ssl_fc_is_resumed] %[ssl_fc_sni]/%sslv/%sslc" This format is basically the HTTP one (see [section 8.2.3](#8.2.3)) with new fields appended to it. The new fields (lines 17 and 18) will be detailed here. For the HTTP ones, refer to the HTTP section. ``` Example : ``` frontend https-in mode http option httpslog log global bind *:443 ssl crt mycerts/srv.pem ... default_backend bck backend static server srv1 127.0.0.1:8000 ssl crt mycerts/clt.pem ... >>> Feb 6 12:14:14 localhost \ haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] https-in \ static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 {1wt.eu} \ {} "GET /index.html HTTP/1.1" 0/0/0/0/0 \ 1wt.eu/TLSv1.3/TLS_AES_256_GCM_SHA384 ``` ``` Field Format Extract from the example above 1 process_name '[' pid ']:' haproxy[14389]: 2 client_ip ':' client_port 10.0.1.2:33317 3 '[' request_date ']' [06/Feb/2009:12:14:14.655] 4 frontend_name https-in 5 backend_name '/' server_name static/srv1 6 TR '/' Tw '/' Tc '/' Tr '/' Ta* 10/0/30/69/109 7 status_code 200 8 bytes_read* 2750 9 captured_request_cookie - 10 captured_response_cookie - 11 termination_state ---- 12 actconn '/' feconn '/' beconn '/' srv_conn '/' retries* 1/1/1/1/0 13 srv_queue '/' backend_queue 0/0 14 '{' captured_request_headers* '}' {haproxy.1wt.eu} 15 '{' captured_response_headers* '}' {} 16 '"' http_request '"' "GET /index.html HTTP/1.1" 17 fc_err '/' ssl_fc_err '/' ssl_c_err '/' ssl_c_ca_err '/' ssl_fc_is_resumed 0/0/0/0/0 18 ssl_fc_sni '/' ssl_version '/' ssl_ciphers 1wt.eu/TLSv1.3/TLS_AES_256_GCM_SHA384 Detailed fields description : - "[fc\_err](#fc_err)" is the status of the connection on the frontend's side. 
It corresponds to the "[fc\_err](#fc_err)" sample fetch. See the "[fc\_err](#fc_err)" and "[fc\_err\_str](#fc_err_str)" sample fetch functions for more information.

- "[ssl\_fc\_err](#ssl_fc_err)" is the last error of the first SSL error stack that was raised on the connection from the frontend's perspective. It might be used to detect SSL handshake errors for instance. It will be 0 if everything went well. See the "[ssl\_fc\_err](#ssl_fc_err)" sample fetch's description for more information.

- "[ssl\_c\_err](#ssl_c_err)" is the status of the client's certificate verification process. The handshake might be successful while having a non-null verification error code if it is an ignored one. See the "[ssl\_c\_err](#ssl_c_err)" sample fetch and the "[crt-ignore-err](#crt-ignore-err)" option.

- "[ssl\_c\_ca\_err](#ssl_c_ca_err)" is the status of the client's certificate chain verification process. The handshake might be successful while having a non-null verification error code if it is an ignored one. See the "[ssl\_c\_ca\_err](#ssl_c_ca_err)" sample fetch and the "[ca-ignore-err](#ca-ignore-err)" option.

- "[ssl\_fc\_is\_resumed](#ssl_fc_is_resumed)" is true if the incoming TLS session was resumed with the stateful cache or a stateless ticket. Don't forget that a TLS session can be shared by multiple requests.

- "[ssl\_fc\_sni](#ssl_fc_sni)" is the SNI (Server Name Indication) presented by the client to select the certificate to be used. It usually matches the host name for the first request of a connection. An absence of this field may indicate that the SNI was not sent by the client, and will lead haproxy to use the default certificate, or to reject the connection in case of strict-sni.

- "ssl_version" is the SSL version of the frontend.

- "ssl_ciphers" is the SSL cipher used for the connection. ```

#### 8.2.5. Error log format ```

When an incoming connection fails due to an SSL handshake or an invalid PROXY protocol header, HAProxy will log the event using a shorter, fixed line format, unless a dedicated error log format is defined through an "[error-log-format](#error-log-format)" line. By default, logs are emitted at the LOG_INFO level, unless the option "[log-separate-errors](#option%20log-separate-errors)" is set in the backend, in which case the LOG_ERR level will be used. Connections on which no data are exchanged (e.g. probes) are not logged if the "[dontlognull](#option%20dontlognull)" option is set.

The default format looks like this :

    >>> Dec 3 18:27:14 localhost \
          haproxy[6103]: 127.0.0.1:56059 [03/Dec/2012:17:35:10.380] frt/f1: \
          Connection error during SSL handshake

  Field   Format                              Extract from the example above
      1   process_name '[' pid ']:'           haproxy[6103]:
      2   client_ip ':' client_port           127.0.0.1:56059
      3   '[' accept_date ']'                 [03/Dec/2012:17:35:10.380]
      4   frontend_name "/" bind_name ":"     frt/f1:
      5   message                             Connection error during SSL handshake

These fields just provide minimal information to help debugging connection failures. By using the "[error-log-format](#error-log-format)" directive, the legacy log format described above will not be used anymore, and all error log lines will follow the defined format.
An example of a reasonably complete error-log-format follows. It will report the source address and port, the connection accept() date, the frontend name, the number of active connections on the process and on this frontend, haproxy's internal error identifier on the front connection, the hexadecimal OpenSSL error number (that can be copy-pasted to "openssl errstr" for full decoding), the client certificate extraction status (0 indicates no error), the client certificate validation status using the CA (0 indicates no error), a boolean indicating if the connection is new or was resumed, the optional server name indication (SNI) provided by the client, the SSL version name and the SSL ciphers used on the connection, if any. Note that backend connection errors are never reported here since, in order for a backend connection to fail, it would have passed through a successful stream, hence will be available as a regular traffic log (see option httplog or option httpslog).

    # detailed frontend connection error log
    error-log-format "%ci:%cp [%tr] %ft %ac/%fc %[fc_err]/\
        %[ssl_fc_err,hex]/%[ssl_c_err]/%[ssl_c_ca_err]/%[ssl_fc_is_resumed] \
        %[ssl_fc_sni]/%sslv/%sslc" ```

#### 8.2.6. Custom log format ```

When the default log formats are not sufficient, it is possible to define new ones in very fine detail. As creating a log-format from scratch is not always a trivial task, it is strongly recommended to first have a look at the existing formats ("[option tcplog](#option%20tcplog)", "[option httplog](#option%20httplog)", "[option httpslog](#option%20httpslog)"), pick the one looking the closest to the expectation, copy its "[log-format](#log-format)" equivalent string and adjust it.

HAProxy understands some log format variables, each preceded by a '%' character. Variables can take arguments using braces ('{}'), and multiple arguments are separated by commas within the braces. Flags may be added or removed by prefixing them with a '+' or '-' sign. Special variable "%o" may be used to propagate its flags to all other variables on the same format string. This is particularly handy with quoted ("Q") and escaped ("E") string formats. If a variable is named between square brackets ('[' .. ']') then it is used as a sample expression rule (see [section 7.3](#7.3)). This is useful to add some less common information such as the client's SSL certificate's DN, or to log the key that would be used to store an entry into a stick table.

Note: spaces must be escaped. In configuration directives "[log-format](#log-format)", "[log-format-sd](#log-format-sd)" and "[unique-id-format](#unique-id-format)", spaces are considered as delimiters and are merged. In order to emit a verbatim '%', it must be preceded by another '%' resulting in '%%'.

Note: when using the RFC5424 syslog message format, the characters '"', '\' and ']' inside PARAM-VALUE should be escaped with '\' as prefix (see https://tools.ietf.org/html/rfc5424#section-6.3.3 for more details). In such cases, the use of the flag "E" should be considered.
Flags are : * Q: quote a string * X: hexadecimal representation (IPs, Ports, %Ts, %rt, %pid) * E: escape characters '"', '\' and ']' in a string with '\' as prefix (intended purpose is for the RFC5424 structured-data log formats) ``` Example: ``` log-format %T\ %t\ Some\ Text log-format %{+Q}o\ %t\ %s\ %{-Q}r log-format-sd %{+Q,+E}o\ [exampleSDID@1234\ header=%[capture.req.hdr(0)]] ``` ``` Please refer to the table below for currently defined variables : +---+------+-----------------------------------------------+-------------+ | R | var | field name (8.2.2 and 8.2.3 for description) | type | +---+------+-----------------------------------------------+-------------+ | | %o | special variable, apply flags on all next var | | +---+------+-----------------------------------------------+-------------+ | | %B | bytes_read (from server to client) | numeric | | H | %CC | captured_request_cookie | string | | H | %CS | captured_response_cookie | string | | | %H | hostname | string | | H | %HM | HTTP method (ex: POST) | string | | H | %HP | HTTP request URI without query string | string | | H | %HPO | HTTP path only (without host nor query string)| string | | H | %HQ | HTTP request URI query string (ex: ?bar=baz) | string | | H | %HU | HTTP request URI (ex: /foo?bar=baz) | string | | H | %HV | HTTP version (ex: HTTP/1.0) | string | | | %ID | unique-id | string | | | %ST | status_code | numeric | | | %T | gmt_date_time | date | | H | %Ta | Active time of the request (from TR to end) | numeric | | | %Tc | Tc | numeric | | | %Td | Td = Tt - (Tq + Tw + Tc + Tr) | numeric | | | %Tl | local_date_time | date | | | %Th | connection handshake time (SSL, PROXY proto) | numeric | | H | %Ti | idle time before the HTTP request | numeric | | H | %Tq | Th + Ti + TR | numeric | | H | %TR | time to receive the full request from 1st byte| numeric | | H | %Tr | Tr (response time) | numeric | | | %Ts | timestamp | numeric | | | %Tt | Tt | numeric | | | %Tu | Tu | numeric | | | %Tw | Tw | numeric | | | %U | bytes_uploaded (from client to server) | numeric | | | %ac | actconn | numeric | | | %b | backend_name | string | | | %bc | beconn (backend concurrent connections) | numeric | | | %bi | backend_source_ip (connecting address) | IP | | | %bp | backend_source_port (connecting address) | numeric | | | %bq | backend_queue | numeric | | | %ci | client_ip (accepted address) | IP | | | %cp | client_port (accepted address) | numeric | | | %f | frontend_name | string | | | %fc | feconn (frontend concurrent connections) | numeric | | | %fi | frontend_ip (accepting address) | IP | | | %fp | frontend_port (accepting address) | numeric | | | %ft | frontend_name_transport ('~' suffix for SSL) | string | | | %lc | frontend_log_counter | numeric | | | %hr | captured_request_headers default style | string | | | %hrl | captured_request_headers CLF style | string list | | | %hs | captured_response_headers default style | string | | | %hsl | captured_response_headers CLF style | string list | | | %ms | accept date milliseconds (left-padded with 0) | numeric | | | %pid | PID | numeric | | H | %r | http_request | string | | | %rc | retries | numeric | | | %rt | request_counter (HTTP req or TCP session) | numeric | | | %s | server_name | string | | | %sc | srv_conn (server concurrent connections) | numeric | | | %si | server_IP (target address) | IP | | | %sp | server_port (target address) | numeric | | | %sq | srv_queue | numeric | | S | %sslc| ssl_ciphers (ex: AES-SHA) | string | | S | %sslv| ssl_version (ex: TLSv1) | string | | | %t | 
date_time (with millisecond resolution) | date | | H | %tr | date_time of HTTP request | date | | H | %trg | gmt_date_time of start of HTTP request | date | | H | %trl | local_date_time of start of HTTP request | date | | | %ts | termination_state | string | | H | %tsc | termination_state with cookie status | string | +---+------+-----------------------------------------------+-------------+ R = Restrictions : H = mode http only ; S = SSL only ``` ### 8.3. Advanced logging options ``` Some advanced logging options are often looked for but are not easy to find out just by looking at the various options. Here is an entry point for the few options which can enable better logging. Please refer to the keywords reference for more information about their usage. ``` #### 8.3.1. Disabling logging of external tests ``` It is quite common to have some monitoring tools perform health checks on HAProxy. Sometimes it will be a layer 3 load-balancer such as LVS or any commercial load-balancer, and sometimes it will simply be a more complete monitoring system such as Nagios. When the tests are very frequent, users often ask how to disable logging for those checks. There are three possibilities : - if connections come from everywhere and are just TCP probes, it is often desired to simply disable logging of connections without data exchange, by setting "[option dontlognull](#option%20dontlognull)" in the frontend. It also disables logging of port scans, which may or may not be desired. - it is possible to use the "http-request set-log-level silent" action using a variety of conditions (source networks, paths, user-agents, etc). - if the tests are performed on a known URI, use "[monitor-uri](#monitor-uri)" to declare this URI as dedicated to monitoring. Any host sending this request will only get the result of a health-check, and the request will not be logged. ``` #### 8.3.2. Logging before waiting for the session to terminate ``` The problem with logging at end of connection is that you have no clue about what is happening during very long sessions, such as remote terminal sessions or large file downloads. This problem can be worked around by specifying "[option logasap](#option%20logasap)" in the frontend. HAProxy will then log as soon as possible, just before data transfer begins. This means that in case of TCP, it will still log the connection status to the server, and in case of HTTP, it will log just after processing the server headers. In this case, the number of bytes reported is the number of header bytes sent to the client. In order to avoid confusion with normal logs, the total time field and the number of bytes are prefixed with a '+' sign which means that real numbers are certainly larger. ``` #### 8.3.3. Raising log level upon errors ``` Sometimes it is more convenient to separate normal traffic from errors logs, for instance in order to ease error monitoring from log files. When the option "[log-separate-errors](#option%20log-separate-errors)" is used, connections which experience errors, timeouts, retries, redispatches or HTTP status codes 5xx will see their syslog level raised from "info" to "err". This will help a syslog daemon store the log in a separate file. It is very important to keep the errors in the normal traffic file too, so that log ordering is not altered. You should also be careful if you already have configured your syslog daemon to store all logs higher than "notice" in an "admin" file, because the "err" level is higher than "notice". ``` #### 8.3.4. 
Disabling logging of successful connections ```

Although this may sound strange at first, some large sites have to deal with multiple thousands of logs per second and are experiencing difficulties keeping them intact for a long time or detecting errors within them. If the option "[dontlog-normal](#option%20dontlog-normal)" is set on the frontend, all normal connections will not be logged. In this regard, a normal connection is defined as one without any error, timeout, retry nor redispatch. In HTTP, the status code is checked too, and a response with a status 5xx is not considered normal and will be logged too. Of course, doing this is really discouraged as it will remove most of the useful information from the logs. Do this only if you have no other alternative. ```

### 8.4. Timing events ```

Timers provide a great help in troubleshooting network problems. All values are reported in milliseconds (ms). These timers should be used in conjunction with the session termination flags. In TCP mode with "[option tcplog](#option%20tcplog)" set on the frontend, 3 control points are reported under the form "Tw/Tc/Tt", and in HTTP mode, 5 control points are reported under the form "TR/Tw/Tc/Tr/Ta". In addition, three other measures are provided, "Th", "Ti", and "Tq".

  Timing events in HTTP mode:

     first request                      2nd request
 |<-------------------------------->|<-------------- ...
 t         tr                       t    tr ...
 ---|----|----|----|----|----|----|----|----|--
    : Th   Ti   TR   Tw   Tc   Tr   Td : Ti ...
    :<---- Tq ---->:                   :
    :<-------------- Tt -------------->:
           :<-- -----Tu--------------->:
              :<--------- Ta --------->:

  Timing events in TCP mode:

     TCP session
 |<----------------->|
 t                   t
 ---|----|----|----|----|---
    | Th   Tw   Tc   Td |
    |<------ Tt ------->|

- Th: total time to accept tcp connection and execute handshakes for low level protocols. Currently, these protocols are proxy-protocol and SSL. This may only happen once during the whole connection's lifetime. A large time here may indicate that the client only pre-established the connection without speaking, that it is experiencing network issues preventing it from completing a handshake in a reasonable time (e.g. MTU issues), or that an SSL handshake was very expensive to compute. Please note that this time is reported only before the first request, so it is safe to average it over all requests to calculate the amortized value. The second and subsequent requests will always report zero here.

- Ti: is the idle time before the HTTP request (HTTP mode only). This timer counts between the end of the handshakes and the first byte of the HTTP request. When dealing with a second request in keep-alive mode, it starts to count after the end of the transmission of the previous response. When a multiplexed protocol such as HTTP/2 is used, it starts to count immediately after the previous request. Some browsers pre-establish connections to a server in order to reduce the latency of a future request, and keep them pending until they need it. This delay will be reported as the idle time. A value of -1 indicates that nothing was received on the connection.

- TR: total time to get the client request (HTTP mode only). It's the time elapsed between the first bytes received and the moment the proxy received the empty line marking the end of the HTTP headers. The value "-1" indicates that the end of headers has never been seen. This happens when the client closes prematurely or times out. This time is usually very short since most requests fit in a single packet.
A large time may indicate a request typed by hand during a test. - Tq: total time to get the client request from the accept date or since the emission of the last byte of the previous response (HTTP mode only). It's exactly equal to Th + Ti + TR unless any of them is -1, in which case it returns -1 as well. This timer used to be very useful before the arrival of HTTP keep-alive and browsers' pre-connect feature. It's recommended to drop it in favor of TR nowadays, as the idle time adds a lot of noise to the reports. - Tw: total time spent in the queues waiting for a connection slot. It accounts for backend queue as well as the server queues, and depends on the queue size, and the time needed for the server to complete previous requests. The value "-1" means that the request was killed before reaching the queue, which is generally what happens with invalid or denied requests. - Tc: total time to establish the TCP connection to the server. It's the time elapsed between the moment the proxy sent the connection request, and the moment it was acknowledged by the server, or between the TCP SYN packet and the matching SYN/ACK packet in return. The value "-1" means that the connection never established. - Tr: server response time (HTTP mode only). It's the time elapsed between the moment the TCP connection was established to the server and the moment the server sent its complete response headers. It purely shows its request processing time, without the network overhead due to the data transmission. It is worth noting that when the client has data to send to the server, for instance during a POST request, the time already runs, and this can distort apparent response time. For this reason, it's generally wise not to trust too much this field for POST requests initiated from clients behind an untrusted network. A value of "-1" here means that the last the response header (empty line) was never seen, most likely because the server timeout stroke before the server managed to process the request. - Ta: total active time for the HTTP request, between the moment the proxy received the first byte of the request header and the emission of the last byte of the response body. The exception is when the "[logasap](#option%20logasap)" option is specified. In this case, it only equals (TR+Tw+Tc+Tr), and is prefixed with a '+' sign. From this field, we can deduce "Td", the data transmission time, by subtracting other timers when valid : Td = Ta - (TR + Tw + Tc + Tr) Timers with "-1" values have to be excluded from this equation. Note that "Ta" can never be negative. - Tt: total session duration time, between the moment the proxy accepted it and the moment both ends were closed. The exception is when the "[logasap](#option%20logasap)" option is specified. In this case, it only equals (Th+Ti+TR+Tw+Tc+Tr), and is prefixed with a '+' sign. From this field, we can deduce "Td", the data transmission time, by subtracting other timers when valid : Td = Tt - (Th + Ti + TR + Tw + Tc + Tr) Timers with "-1" values have to be excluded from this equation. In TCP mode, "Ti", "Tq" and "Tr" have to be excluded too. Note that "Tt" can never be negative and that for HTTP, Tt is simply equal to (Th+Ti+Ta). - Tu: total estimated time as seen from client, between the moment the proxy accepted it and the moment both ends were closed, without idle time. This is useful to roughly measure end-to-end time as a user would see it, without idle time pollution from keep-alive time between requests. 
This timer in only an estimation of time seen by user as it assumes network latency is the same in both directions. The exception is when the "[logasap](#option%20logasap)" option is specified. In this case, it only equals (Th+TR+Tw+Tc+Tr), and is prefixed with a '+' sign. These timers provide precious indications on trouble causes. Since the TCP protocol defines retransmit delays of 3, 6, 12... seconds, we know for sure that timers close to multiples of 3s are nearly always related to lost packets due to network problems (wires, negotiation, congestion). Moreover, if "Ta" or "Tt" is close to a timeout value specified in the configuration, it often means that a session has been aborted on timeout. Most common cases : - If "Th" or "Ti" are close to 3000, a packet has probably been lost between the client and the proxy. This is very rare on local networks but might happen when clients are on far remote networks and send large requests. It may happen that values larger than usual appear here without any network cause. Sometimes, during an attack or just after a resource starvation has ended, HAProxy may accept thousands of connections in a few milliseconds. The time spent accepting these connections will inevitably slightly delay processing of other connections, and it can happen that request times in the order of a few tens of milliseconds are measured after a few thousands of new connections have been accepted at once. Using one of the keep-alive modes may display larger idle times since "Ti" measures the time spent waiting for additional requests. - If "Tc" is close to 3000, a packet has probably been lost between the server and the proxy during the server connection phase. This value should always be very low, such as 1 ms on local networks and less than a few tens of ms on remote networks. - If "Tr" is nearly always lower than 3000 except some rare values which seem to be the average majored by 3000, there are probably some packets lost between the proxy and the server. - If "Ta" is large even for small byte counts, it generally is because neither the client nor the server decides to close the connection while HAProxy is running in tunnel mode and both have agreed on a keep-alive connection mode. In order to solve this issue, it will be needed to specify one of the HTTP options to manipulate keep-alive or close options on either the frontend or the backend. Having the smallest possible 'Ta' or 'Tt' is important when connection regulation is used with the "maxconn" option on the servers, since no new connection will be sent to the server until another one is released. Other noticeable HTTP log cases ('xx' means any value to be ignored) : TR/Tw/Tc/Tr/+Ta The "[option logasap](#option%20logasap)" is present on the frontend and the log was emitted before the data phase. All the timers are valid except "Ta" which is shorter than reality. -1/xx/xx/xx/Ta The client was not able to send a complete request in time or it aborted too early. Check the session termination flags then "[timeout http-request](#timeout%20http-request)" and "timeout client" settings. TR/-1/xx/xx/Ta It was not possible to process the request, maybe because servers were out of order, because the request was invalid or forbidden by ACL rules. Check the session termination flags. TR/Tw/-1/xx/Ta The connection could not establish on the server. Either it actively refused it or it timed out after Ta-(TR+Tw) ms. Check the session termination flags, then check the "timeout connect" setting. 
Note that the tarpit action might return similar-looking patterns, with "Tw" equal to the time the client connection was maintained open. TR/Tw/Tc/-1/Ta The server has accepted the connection but did not return a complete response in time, or it closed its connection unexpectedly after Ta-(TR+Tw+Tc) ms. Check the session termination flags, then check the "timeout server" setting. ``` ### 8.5. Session state at disconnection ``` TCP and HTTP logs provide a session termination indicator in the "termination_state" field, just before the number of active connections. It is 2-characters long in TCP mode, and is extended to 4 characters in HTTP mode, each of which has a special meaning : - On the first character, a code reporting the first event which caused the session to terminate : C : the TCP session was unexpectedly aborted by the client. S : the TCP session was unexpectedly aborted by the server, or the server explicitly refused it. P : the session was prematurely aborted by the proxy, because of a connection limit enforcement, because a DENY filter was matched, because of a security check which detected and blocked a dangerous error in server response which might have caused information leak (e.g. cacheable cookie). L : the session was locally processed by HAProxy and was not passed to a server. This is what happens for stats and redirects. R : a resource on the proxy has been exhausted (memory, sockets, source ports, ...). Usually, this appears during the connection phase, and system logs should contain a copy of the precise error. If this happens, it must be considered as a very serious anomaly which should be fixed as soon as possible by any means. I : an internal error was identified by the proxy during a self-check. This should NEVER happen, and you are encouraged to report any log containing this, because this would almost certainly be a bug. It would be wise to preventively restart the process after such an event too, in case it would be caused by memory corruption. D : the session was killed by HAProxy because the server was detected as down and was configured to kill all connections when going down. U : the session was killed by HAProxy on this backup server because an active server was detected as up and was configured to kill all backup connections when going up. K : the session was actively killed by an admin operating on HAProxy. c : the client-side timeout expired while waiting for the client to send or receive data. s : the server-side timeout expired while waiting for the server to send or receive data. - : normal session completion, both the client and the server closed with nothing left in the buffers. - on the second character, the TCP or HTTP session state when it was closed : R : the proxy was waiting for a complete, valid REQUEST from the client (HTTP mode only). Nothing was sent to any server. Q : the proxy was waiting in the QUEUE for a connection slot. This can only happen when servers have a 'maxconn' parameter set. It can also happen in the global queue after a redispatch consecutive to a failed attempt to connect to a dying server. If no redispatch is reported, then no connection attempt was made to any server. C : the proxy was waiting for the CONNECTION to establish on the server. The server might at most have noticed a connection attempt. H : the proxy was waiting for complete, valid response HEADERS from the server (HTTP only). D : the session was in the DATA phase. 
L : the proxy was still transmitting LAST data to the client while the server had already finished. This one is very rare as it can only happen when the client dies while receiving the last packets.

T : the request was tarpitted. It has been held open with the client during the whole "[timeout tarpit](#timeout%20tarpit)" duration or until the client closed, both of which will be reported in the "Tw" timer.

- : normal session completion after end of data transfer.

- the third character tells whether the persistence cookie was provided by the client (only in HTTP mode) :

N : the client provided NO cookie. This is usually the case for new visitors, so counting the number of occurrences of this flag in the logs generally indicates a valid trend for the site frequentation.

I : the client provided an INVALID cookie matching no known server. This might be caused by a recent configuration change, mixed cookies between HTTP/HTTPS sites, persistence conditionally ignored, or an attack.

D : the client provided a cookie designating a server which was DOWN, so either "[option persist](#option%20persist)" was used and the client was sent to this server, or it was not set and the client was redispatched to another server.

V : the client provided a VALID cookie, and was sent to the associated server.

E : the client provided a valid cookie, but with a last date which was older than what is allowed by the "maxidle" cookie parameter, so the cookie is considered EXPIRED and is ignored. The request will be redispatched just as if there was no cookie.

O : the client provided a valid cookie, but with a first date which was older than what is allowed by the "maxlife" cookie parameter, so the cookie is considered too OLD and is ignored. The request will be redispatched just as if there was no cookie.

U : a cookie was present but was not used to select the server because some other server selection mechanism was used instead (typically a "[use-server](#use-server)" rule).

- : does not apply (no cookie set in configuration).

- the last character reports what operations were performed on the persistence cookie returned by the server (only in HTTP mode) :

N : NO cookie was provided by the server, and none was inserted either.

I : no cookie was provided by the server, and the proxy INSERTED one. Note that in "cookie insert" mode, if the server provides a cookie, it will still be overwritten and reported as "I" here.

U : the proxy UPDATED the last date in the cookie that was presented by the client. This can only happen in insert mode with "maxidle". It happens every time there is activity at a different date than the date indicated in the cookie. If any other change happens, such as a redispatch, then the cookie will be marked as inserted instead.

P : a cookie was PROVIDED by the server and transmitted as-is.

R : the cookie provided by the server was REWRITTEN by the proxy, which happens in "cookie rewrite" or "cookie prefix" modes.

D : the cookie provided by the server was DELETED by the proxy.

- : does not apply (no cookie set in configuration).

The combination of the first two flags gives a lot of information about what was happening when the session terminated, and why it did terminate. It can be helpful to detect server saturation, network troubles, local system resource starvation, attacks, etc...

The most common termination flags combinations are indicated below. They are alphabetically sorted, with the lowercase set just after the upper case for easier finding and understanding.

  Flags   Reason

     --   Normal termination.
CC The client aborted before the connection could be established to the server. This can happen when HAProxy tries to connect to a recently dead (or unchecked) server, and the client aborts while HAProxy is waiting for the server to respond or for "timeout connect" to expire. CD The client unexpectedly aborted during data transfer. This can be caused by a browser crash, by an intermediate equipment between the client and HAProxy which decided to actively break the connection, by network routing issues between the client and HAProxy, or by a keep-alive session between the server and the client terminated first by the client. cD The client did not send nor acknowledge any data for as long as the "timeout client" delay. This is often caused by network failures on the client side, or the client simply leaving the net uncleanly. CH The client aborted while waiting for the server to start responding. It might be the server taking too long to respond or the client clicking the 'Stop' button too fast. cH The "timeout client" stroke while waiting for client data during a POST request. This is sometimes caused by too large TCP MSS values for PPPoE networks which cannot transport full-sized packets. It can also happen when client timeout is smaller than server timeout and the server takes too long to respond. CQ The client aborted while its session was queued, waiting for a server with enough empty slots to accept it. It might be that either all the servers were saturated or that the assigned server was taking too long a time to respond. CR The client aborted before sending a full HTTP request. Most likely the request was typed by hand using a telnet client, and aborted too early. The HTTP status code is likely a 400 here. Sometimes this might also be caused by an IDS killing the connection between HAProxy and the client. "[option http-ignore-probes](#option%20http-ignore-probes)" can be used to ignore connections without any data transfer. cR The "[timeout http-request](#timeout%20http-request)" stroke before the client sent a full HTTP request. This is sometimes caused by too large TCP MSS values on the client side for PPPoE networks which cannot transport full-sized packets, or by clients sending requests by hand and not typing fast enough, or forgetting to enter the empty line at the end of the request. The HTTP status code is likely a 408 here. Note: recently, some browsers started to implement a "pre-connect" feature consisting in speculatively connecting to some recently visited web sites just in case the user would like to visit them. This results in many connections being established to web sites, which end up in 408 Request Timeout if the timeout strikes first, or 400 Bad Request when the browser decides to close them first. These ones pollute the log and feed the error counters. Some versions of some browsers have even been reported to display the error code. It is possible to work around the undesirable effects of this behavior by adding "option http-ignore-probes" in the frontend, resulting in connections with zero data transfer to be totally ignored. This will definitely hide the errors of people experiencing connectivity issues though. CT The client aborted while its session was tarpitted. It is important to check if this happens on valid requests, in order to be sure that no wrong tarpit rules have been written. 
If a lot of them happen, it might make sense to lower the "[timeout tarpit](#timeout%20tarpit)" value to something closer to the average reported "Tw" timer, in order not to consume resources for just a few attackers. LR The request was intercepted and locally handled by HAProxy. Generally it means that this was a redirect or a stats request. SC The server or an equipment between it and HAProxy explicitly refused the TCP connection (the proxy received a TCP RST or an ICMP message in return). Under some circumstances, it can also be the network stack telling the proxy that the server is unreachable (e.g. no route, or no ARP response on local network). When this happens in HTTP mode, the status code is likely a 502 or 503 here. sC The "timeout connect" stroke before a connection to the server could complete. When this happens in HTTP mode, the status code is likely a 503 or 504 here. SD The connection to the server died with an error during the data transfer. This usually means that HAProxy has received an RST from the server or an ICMP message from an intermediate equipment while exchanging data with the server. This can be caused by a server crash or by a network issue on an intermediate equipment. sD The server did not send nor acknowledge any data for as long as the "timeout server" setting during the data phase. This is often caused by too short timeouts on L4 equipment before the server (firewalls, load-balancers, ...), as well as keep-alive sessions maintained between the client and the server expiring first on HAProxy. SH The server aborted before sending its full HTTP response headers, or it crashed while processing the request. Since a server aborting at this moment is very rare, it would be wise to inspect its logs to control whether it crashed and why. The logged request may indicate a small set of faulty requests, demonstrating bugs in the application. Sometimes this might also be caused by an IDS killing the connection between HAProxy and the server. sH The "timeout server" stroke before the server could return its response headers. This is the most common anomaly, indicating too long transactions, probably caused by server or database saturation. The immediate workaround consists in increasing the "timeout server" setting, but it is important to keep in mind that the user experience will suffer from these long response times. The only long term solution is to fix the application. sQ The session spent too much time in queue and has been expired. See the "[timeout queue](#timeout%20queue)" and "timeout connect" settings to find out how to fix this if it happens too often. If it often happens massively in short periods, it may indicate general problems on the affected servers due to I/O or database congestion, or saturation caused by external attacks. PC The proxy refused to establish a connection to the server because the process's socket limit has been reached while attempting to connect. The global "maxconn" parameter may be increased in the configuration so that it does not happen anymore. This status is very rare and might happen when the global "[ulimit-n](#ulimit-n)" parameter is forced by hand. PD The proxy blocked an incorrectly formatted chunked encoded message in a request or a response, after the server has emitted its headers. In most cases, this will indicate an invalid message from the server to the client. HAProxy supports chunk sizes of up to 2GB - 1 (2147483647 bytes). Any larger size will be considered as an error. 
PH The proxy blocked the server's response, because it was invalid, incomplete, dangerous (cache control), or matched a security filter. In any case, an HTTP 502 error is sent to the client. One possible cause for this error is an invalid syntax in an HTTP header name containing unauthorized characters. It is also possible, but quite rare, that the proxy blocked a chunked-encoding request from the client due to an invalid syntax, before the server responded. In this case, an HTTP 400 error is sent to the client and reported in the logs. Finally, it may be due to an HTTP header rewrite failure on the response. In this case, an HTTP 500 error is sent (see "[tune.maxrewrite](#tune.maxrewrite)" and "[http-response strict-mode](#http-response%20strict-mode)" for more information).
PR The proxy blocked the client's HTTP request, either because of an invalid HTTP syntax, in which case it returned an HTTP 400 error to the client, or because a deny filter matched, in which case it returned an HTTP 403 error. It may also be due to an HTTP header rewrite failure on the request. In this case, an HTTP 500 error is sent (see "[tune.maxrewrite](#tune.maxrewrite)" and "[http-request strict-mode](#http-request%20strict-mode)" for more information).
PT The proxy blocked the client's request and has tarpitted its connection before returning it a 500 server error. Nothing was sent to the server. The connection was maintained open for as long as reported by the "Tw" timer field.
RC A local resource has been exhausted (memory, sockets, source ports) preventing the connection to the server from establishing. The error logs will tell precisely what was missing. This is very rare and can only be solved by proper system tuning.
The combination of the last two flags gives a lot of information about how persistence was handled by the client, the server and by HAProxy. This is very important to troubleshoot disconnections, when users complain they have to re-authenticate. The commonly encountered flags are :
-- Persistence cookie is not enabled.
NN No cookie was provided by the client, none was inserted in the response. For instance, this can be in insert mode with "postonly" set on a GET request.
II A cookie designating an invalid server was provided by the client, a valid one was inserted in the response. This typically happens when a "server" entry is removed from the configuration, since its cookie value can be presented by a client when no other server knows it.
NI No cookie was provided by the client, one was inserted in the response. This typically happens for first requests from every user in "insert" mode, which makes it an easy way to count real users.
VN A cookie was provided by the client, none was inserted in the response. This happens for most responses for which the client has already got a cookie.
VU A cookie was provided by the client, with a last visit date which is not completely up-to-date, so an updated cookie was provided in response. This can also happen if there was no date at all, or if there was a date but the "maxidle" parameter was not set, so that the cookie can be switched to unlimited time.
EI A cookie was provided by the client, with a last visit date which is too old for the "maxidle" parameter, so the cookie was ignored and a new cookie was inserted in the response.
OI A cookie was provided by the client, with a first visit date which is too old for the "maxlife" parameter, so the cookie was ignored and a new cookie was inserted in the response.
DI The server designated by the cookie was down, a new server was selected and a new cookie was emitted in the response. VI The server designated by the cookie was not marked dead but could not be reached. A redispatch happened and selected another one, which was then advertised in the response. ``` ### 8.6. Non-printable characters ``` In order not to cause trouble to log analysis tools or terminals during log consulting, non-printable characters are not sent as-is into log files, but are converted to the two-digits hexadecimal representation of their ASCII code, prefixed by the character '#'. The only characters that can be logged without being escaped are comprised between 32 and 126 (inclusive). Obviously, the escape character '#' itself is also encoded to avoid any ambiguity ("#23"). It is the same for the character '"' which becomes "#22", as well as '{', '|' and '}' when logging headers. Note that the space character (' ') is not encoded in headers, which can cause issues for tools relying on space count to locate fields. A typical header containing spaces is "User-Agent". Last, it has been observed that some syslog daemons such as syslog-ng escape the quote ('"') with a backslash ('\'). The reverse operation can safely be performed since no quote may appear anywhere else in the logs. ``` ### 8.7. Capturing HTTP cookies ``` Cookie capture simplifies the tracking a complete user session. This can be achieved using the "[capture cookie](#capture%20cookie)" statement in the frontend. Please refer to [section 4.2](#4.2) for more details. Only one cookie can be captured, and the same cookie will simultaneously be checked in the request ("Cookie:" header) and in the response ("Set-Cookie:" header). The respective values will be reported in the HTTP logs at the "captured_request_cookie" and "captured_response_cookie" locations (see [section 8.2.3](#8.2.3) about HTTP log format). When either cookie is not seen, a dash ('-') replaces the value. This way, it's easy to detect when a user switches to a new session for example, because the server will reassign it a new cookie. It is also possible to detect if a server unexpectedly sets a wrong cookie to a client, leading to session crossing. ``` Examples : ``` # capture the first cookie whose name starts with "ASPSESSION" capture cookie ASPSESSION len 32 # capture the first cookie whose name is exactly "vgnvisitor" capture cookie vgnvisitor= len 32 ``` ### 8.8. Capturing HTTP headers ``` Header captures are useful to track unique request identifiers set by an upper proxy, virtual host names, user-agents, POST content-length, referrers, etc. In the response, one can search for information about the response length, how the server asked the cache to behave, or an object location during a redirection. Header captures are performed using the "[capture request header](#capture%20request%20header)" and "capture response header" statements in the frontend. Please consult their definition in [section 4.2](#4.2) for more details. It is possible to include both request headers and response headers at the same time. Non-existent headers are logged as empty strings, and if one header appears more than once, only its last occurrence will be logged. Request headers are grouped within braces '{' and '}' in the same order as they were declared, and delimited with a vertical bar '|' without any space. Response headers follow the same representation, but are displayed after a space following the request headers block. 
These blocks are displayed just before the HTTP request in the logs. As a special case, it is possible to specify an HTTP header capture in a TCP frontend. The purpose is to enable logging of headers which will be parsed in an HTTP backend if the request is then switched to this HTTP backend. ``` Example : ``` # This instance chains to the outgoing proxy listen proxy-out mode http option httplog option logasap log global server cache1 192.168.1.1:3128 # log the name of the virtual server capture request header Host len 20 # log the amount of data uploaded during a POST capture request header Content-Length len 10 # log the beginning of the referrer capture request header Referer len 20 # server name (useful for outgoing proxies only) capture response header Server len 20 # logging the content-length is useful with "[option logasap](#option%20logasap)" capture response header Content-Length len 10 # log the expected cache behavior on the response capture response header Cache-Control len 8 # the Via header will report the next proxy's name capture response header Via len 20 # log the URL location during a redirection capture response header Location len 20 >>> Aug 9 20:26:09 localhost \ haproxy[2022]: 127.0.0.1:34014 [09/Aug/2004:20:26:09] proxy-out \ proxy-out/cache1 0/0/0/162/+162 200 +350 - - ---- 0/0/0/0/0 0/0 \ {fr.adserver.yahoo.co||http://fr.f416.mail.} {|864|private||} \ "GET http://fr.adserver.yahoo.com/" >>> Aug 9 20:30:46 localhost \ haproxy[2022]: 127.0.0.1:34020 [09/Aug/2004:20:30:46] proxy-out \ proxy-out/cache1 0/0/0/182/+182 200 +279 - - ---- 0/0/0/0/0 0/0 \ {w.ods.org||} {Formilux/0.1.8|3495|||} \ "GET http://trafic.1wt.eu/ HTTP/1.1" >>> Aug 9 20:30:46 localhost \ haproxy[2022]: 127.0.0.1:34028 [09/Aug/2004:20:30:46] proxy-out \ proxy-out/cache1 0/0/2/126/+128 301 +223 - - ---- 0/0/0/0/0 0/0 \ {www.sytadin.equipement.gouv.fr||http://trafic.1wt.eu/} \ {Apache|230|||http://www.sytadin.} \ "GET http://www.sytadin.equipement.gouv.fr/ HTTP/1.1" ``` ### 8.9. Examples of logs ``` These are real-world examples of logs accompanied with an explanation. Some of them have been made up by hand. The syslog part has been removed for better reading. Their sole purpose is to explain how to decipher them. >>> haproxy[674]: 127.0.0.1:33318 [15/Oct/2003:08:31:57.130] px-http \ px-http/srv1 6559/0/7/147/6723 200 243 - - ---- 5/3/3/1/0 0/0 \ "HEAD / HTTP/1.0" => long request (6.5s) entered by hand through 'telnet'. The server replied in 147 ms, and the session ended normally ('----') >>> haproxy[674]: 127.0.0.1:33319 [15/Oct/2003:08:31:57.149] px-http \ px-http/srv1 6559/1230/7/147/6870 200 243 - - ---- 324/239/239/99/0 \ 0/9 "HEAD / HTTP/1.0" => Idem, but the request was queued in the global queue behind 9 other requests, and waited there for 1230 ms. >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.654] px-http \ px-http/srv1 9/0/7/14/+30 200 +243 - - ---- 3/3/3/1/0 0/0 \ "GET /image.iso HTTP/1.0" => request for a long data transfer. The "[logasap](#option%20logasap)" option was specified, so the log was produced just before transferring data. The server replied in 14 ms, 243 bytes of headers were sent to the client, and total time from accept to first data byte is 30 ms. >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.925] px-http \ px-http/srv1 9/0/7/14/30 502 243 - - PH-- 3/2/2/0/0 0/0 \ "GET /cgi-bin/bug.cgi? 
HTTP/1.0" => the proxy blocked a server response either because of an "http-response deny" rule, or because the response was improperly formatted and not HTTP-compliant, or because it blocked sensitive information which risked being cached. In this case, the response is replaced with a "502 bad gateway". The flags ("PH--") tell us that it was HAProxy who decided to return the 502 and not the server. >>> haproxy[18113]: 127.0.0.1:34548 [15/Oct/2003:15:18:55.798] px-http \ px-http/<NOSRV> -1/-1/-1/-1/8490 -1 0 - - CR-- 2/2/2/0/0 0/0 "" => the client never completed its request and aborted itself ("C---") after 8.5s, while the proxy was waiting for the request headers ("-R--"). Nothing was sent to any server. >>> haproxy[18113]: 127.0.0.1:34549 [15/Oct/2003:15:19:06.103] px-http \ px-http/<NOSRV> -1/-1/-1/-1/50001 408 0 - - cR-- 2/2/2/0/0 0/0 "" => The client never completed its request, which was aborted by the time-out ("c---") after 50s, while the proxy was waiting for the request headers ("-R--"). Nothing was sent to any server, but the proxy could send a 408 return code to the client. >>> haproxy[18989]: 127.0.0.1:34550 [15/Oct/2003:15:24:28.312] px-tcp \ px-tcp/srv1 0/0/5007 0 cD 0/0/0/0/0 0/0 => This log was produced with "[option tcplog](#option%20tcplog)". The client timed out after 5 seconds ("c----"). >>> haproxy[18989]: 10.0.0.1:34552 [15/Oct/2003:15:26:31.462] px-http \ px-http/srv1 3183/-1/-1/-1/11215 503 0 - - SC-- 205/202/202/115/3 \ 0/0 "HEAD / HTTP/1.0" => The request took 3s to complete (probably a network problem), and the connection to the server failed ('SC--') after 4 attempts of 2 seconds (config says 'retries 3'), and no redispatch (otherwise we would have seen "/+3"). Status code 503 was returned to the client. There were 115 connections on this server, 202 connections on this proxy, and 205 on the global process. It is possible that the server refused the connection because of too many already established. ``` 9. Supported filters --------------------- ``` Here are listed officially supported filters with the list of parameters they accept. Depending on compile options, some of these filters might be unavailable. The list of available filters is reported in haproxy -vv. ``` **See also :** "filter" ### 9.1. Trace **filter trace** [name <name>] [random-forwarding] [hexdump] Arguments: ``` <name> is an arbitrary name that will be reported in messages. If no name is provided, "TRACE" is used. <quiet> inhibits trace messages. <random-forwarding> enables the random forwarding of parsed data. By default, this filter forwards all previously parsed data. With this parameter, it only forwards a random amount of the parsed data. <hexdump> dumps all forwarded data to the server and the client. ``` ``` This filter can be used as a base to develop new filters. It defines all callbacks and print a message on the standard error stream (stderr) with useful information for all of them. It may be useful to debug the activity of other filters or, quite simply, HAProxy's activity. Using <random-parsing> and/or <random-forwarding> parameters is a good way to tests the behavior of a filter that parses data exchanged between a client and a server by adding some latencies in the processing. ``` ### 9.2. HTTP compression **filter compression** ``` The HTTP compression has been moved in a filter in HAProxy 1.7. "[compression](#compression)" keyword must still be used to enable and configure the HTTP compression. And when no other filter is used, it is enough. 
When used with the cache or the fcgi-app enabled, it is also enough. In this case, the compression is always done after the response is stored in the cache. But it is mandatory to explicitly use a filter line to enable the HTTP compression when at least one filter other than the cache or the fcgi-app is used for the same listener/frontend/backend. This is important to know the filters evaluation order. ``` **See also :** "[compression](#compression)", [section 9.4](#9.4) about the cache filter and [section 9.5](#9.5) about the fcgi-app filter. ### 9.3. Stream Processing Offload Engine (SPOE) **filter spoe** [engine <name>] config <file> Arguments : ``` <name> is the engine name that will be used to find the right scope in the configuration file. If not provided, all the file will be parsed. <file> is the path of the engine configuration file. This file can contain configuration of several engines. In this case, each part must be placed in its own scope. ``` ``` The Stream Processing Offload Engine (SPOE) is a filter communicating with external components. It allows the offload of some specifics processing on the streams in tiered applications. These external components and information exchanged with them are configured in dedicated files, for the main part. It also requires dedicated backends, defined in HAProxy configuration. SPOE communicates with external components using an in-house binary protocol, the Stream Processing Offload Protocol (SPOP). For all information about the SPOE configuration and the SPOP specification, see "doc/SPOE.txt". ``` ### 9.4. Cache **filter cache** <name> Arguments : ``` <name> is name of the cache section this filter will use. ``` ``` The cache uses a filter to store cacheable responses. The HTTP rules "cache-store" and "cache-use" must be used to define how and when to use a cache. By default the corresponding filter is implicitly defined. And when no other filters than fcgi-app or compression are used, it is enough. In such case, the compression filter is always evaluated after the cache filter. But it is mandatory to explicitly use a filter line to use a cache when at least one filter other than the compression or the fcgi-app is used for the same listener/frontend/backend. This is important to know the filters evaluation order. ``` **See also :** [section 9.2](#9.2) about the compression filter, [section 9.5](#9.5) about the fcgi-app filter and [section 6](#6) about cache. ### 9.5. Fcgi-app **filter fcgi-app** <name> Arguments : ``` <name> is name of the fcgi-app section this filter will use. ``` ``` The FastCGI application uses a filter to evaluate all custom parameters on the request path, and to process the headers on the response path. the <name> must reference an existing fcgi-app section. The directive "use-fcgi-app" should be used to define the application to use. By default the corresponding filter is implicitly defined. And when no other filters than cache or compression are used, it is enough. But it is mandatory to explicitly use a filter line to a fcgi-app when at least one filter other than the compression or the cache is used for the same backend. This is important to know the filters evaluation order. ``` **See also:** "use-fcgi-app", [section 9.2](#9.2) about the compression filter, [section 9.4](#9.4) about the cache filter and [section 10](#10) about FastCGI application. ### 9.6. OpenTracing ``` The OpenTracing filter adds native support for using distributed tracing in HAProxy. 
This is enabled by sending an OpenTracing compliant request to one of the supported tracers such as Datadog, Jaeger, Lightstep and Zipkin tracers. Please note: tracers are not listed by any preference, but alphabetically. This feature is only enabled when HAProxy was built with USE_OT=1. The OpenTracing filter activation is done explicitly by specifying it in the HAProxy configuration. If this is not done, the OpenTracing filter in no way participates in the work of HAProxy. ``` **filter opentracing** [id <id>] config <file> Arguments : ``` <id> is the OpenTracing filter id that will be used to find the right scope in the configuration file. If no filter id is specified, 'ot-filter' is used as default. If scope is not specified in the configuration file, it applies to all defined OpenTracing filters. <file> is the path of the OpenTracing configuration file. The same file can contain configurations for multiple OpenTracing filters simultaneously. In that case we do not need to define scope so the same configuration applies to all filters or each filter must have its own scope defined. ``` ``` More detailed documentation related to the operation, configuration and use of the filter can be found in the addons/ot directory. ``` ### 9.7. Bandwidth limitation **filter bwlim-in** <name> default-limit <size> default-period <time> [min-size <sz>] **filter bwlim-out** <name> default-limit <size> default-period <time> [min-size <sz>] **filter bwlim-in** <name> limit <size> key <pattern> [table <table>] [min-size <sz>] **filter bwlim-out** <name> limit <size> key <pattern> [table <table>] [min-size <sz>] Arguments : ``` <name> is the filter name that will be used by 'set-bandwidth-limit' actions to reference a specific bandwidth limitation filter. <size> is max number of bytes that can be forwarded over the period. The value must be specified for per-stream and shared bandwidth limitation filters. It follows the HAProxy size format and is expressed in bytes. <pattern> is a sample expression rule as described in [section 7.3](#7.3). It describes what elements will be analyzed, extracted, combined, and used to select which table entry to update the counters. It must be specified for shared bandwidth limitation filters only. <table> is an optional table to be used instead of the default one, which is the stick-table declared in the current proxy. It can be specified for shared bandwidth limitation filters only. <time> is the default time period used to evaluate the bandwidth limitation rate. It can be specified for per-stream bandwidth limitation filters only. It follows the HAProxy time format and is expressed in milliseconds. <min-size> is the optional minimum number of bytes forwarded at a time by a stream excluding the last packet that may be smaller. This value can be specified for per-stream and shared bandwidth limitation filters. It follows the HAProxy size format and is expressed in bytes. ``` ``` Bandwidth limitation filters should be used to restrict the data forwarding speed at the stream level. By extension, such filters limit the network bandwidth consumed by a resource. Several bandwidth limitation filters can be used. For instance, it is possible to define a limit per source address to be sure a client will never consume all the network bandwidth, thereby penalizing other clients, and another one per stream to be able to fairly handle several connections for a given client. The definition order of these filters is important. 
If several bandwidth filters are enabled on a stream, the filtering will be applied in their definition order. It is also important to understand the definition order of the other filters have an influence. For instance, depending on the HTTP compression filter is defined before or after a bandwidth limitation filter, the limit will be applied on the compressed payload or not. The same is true for the cache filter. There are two kinds of bandwidth limitation filters. The first one enforces a default limit and is applied per stream. The second one uses a stickiness table to enforce a limit equally divided between all streams sharing the same entry in the table. In addition, for a given filter, depending on the filter keyword used, the limitation can be applied on incoming data, received from the client and forwarded to a server, or on outgoing data, received from a server and sent to the client. To apply a limit on incoming data, "bwlim-in" keyword must be used. To apply it on outgoing data, "bwlim-out" keyword must be used. In both cases, the bandwidth limitation is applied on forwarded data, at the stream level. The bandwidth limitation is applied at the stream level and not at the connection level. For multiplexed protocols (H2, H3 and FastCGI), the streams of the same connection may have different limits. For a per-stream bandwidth limitation filter, default period and limit must be defined. As their names suggest, they are the default values used to setup the bandwidth limitation rate for a stream. However, for this kind of filter and only this one, it is possible to redefine these values using sample expressions when the filter is enabled with a TCP/HTTP "set-bandwidth-limit" action. For a shared bandwidth limitation filter, depending on whether it is applied on incoming or outgoing data, the stickiness table used must store the corresponding bytes rate information. "bytes_in_rate(<period>)" counter must be stored to limit incoming data and "bytes_out_rate(<period>)" counter must be used to limit outgoing data. Finally, it is possible to set the minimum number of bytes that a bandwidth limitation filter can forward at a time for a given stream. It should be used to not forward too small amount of data, to reduce the CPU usage. It must carefully be defined. Too small, a value can increase the CPU usage. Too high, it can increase the latency. It is also highly linked to the defined bandwidth limit. If it is too close to the bandwidth limit, some pauses may be experienced to not exceed the limit because too many bytes will be consumed at a time. It is highly dependent on the filter configuration. A good idea is to start with something around 2 TCP MSS, typically 2896 bytes, and tune it after some experimentations. ``` Example: ``` frontend http bind *:80 mode http # If this filter is enabled, the stream will share the download limit # of 10m/s with all other streams with the same source address. filter bwlim-out limit-by-src key src table limit-by-src limit 10m # If this filter is enabled, the stream will be limited to download at 1m/s, # independently of all other streams. filter bwlim-out limit-by-strm default-limit 1m default-period 1s # Limit all streams to 1m/s (the default limit) and those accessing the # internal API to 100k/s. Limit each source address to 10m/s. The shared # limit is applied first. Both are limiting the download rate. 
http-request set-bandwidth-limit limit-by-strm http-request set-bandwidth-limit limit-by-strm limit 100k if { path_beg /internal } http-request set-bandwidth-limit limit-by-src ... backend limit-by-src # The stickiness table used by <limit-by-src> filter stick-table type ip size 1m expire 3600s store bytes_out_rate(1s) ``` **See also :** "[tcp-request content set-bandwidth-limit](#tcp-request%20content%20set-bandwidth-limit)", "[tcp-response content set-bandwidth-limit](#tcp-response%20content%20set-bandwidth-limit)", "[http-request set-bandwidth-limit](#http-request%20set-bandwidth-limit)" and "[http-response set-bandwidth-limit](#http-response%20set-bandwidth-limit)". 10. FastCGI applications ------------------------- ``` HAProxy is able to send HTTP requests to Responder FastCGI applications. This feature was added in HAProxy 2.1. To do so, servers must be configured to use the FastCGI protocol (using the keyword "proto fcgi" on the server line) and a FastCGI application must be configured and used by the backend managing these servers (using the keyword "use-fcgi-app" into the proxy section). Several FastCGI applications may be defined, but only one can be used at a time by a backend. HAProxy implements all features of the FastCGI specification for Responder application. Especially it is able to multiplex several requests on a simple connection. ``` ### 10.1. Setup #### 10.1.1. Fcgi-app section **fcgi-app** <name> ``` Declare a FastCGI application named <name>. To be valid, at least the document root must be defined. ``` **acl** <aclname> <criterion> [flags] [operator] <value> ... ``` Declare or complete an access list. See "acl" keyword in [section 4.2](#4.2) and [section 7](#7) about ACL usage for details. ACLs defined for a FastCGI application are private. They cannot be used by any other application or by any proxy. In the same way, ACLs defined in any other section are not usable by a FastCGI application. However, Pre-defined ACLs are available. ``` **docroot** <path> Define the document root on the remote host. <path> will be used to build the default value of FastCGI parameters SCRIPT\_FILENAME and PATH\_TRANSLATED. It is a mandatory setting. **index** <script-name> ``` Define the script name that will be appended after an URI that ends with a slash ("/") to set the default value of the FastCGI parameter SCRIPT_NAME. It is an optional setting. ``` Example : ``` index index.php ``` **log-stderr global** **log-stderr** <address> [len <length>] [format <format>] [sample <ranges>:<sample\_size>] <facility> [<level> [<minlevel>]] ``` Enable logging of STDERR messages reported by the FastCGI application. See "log" keyword in [section 4.2](#4.2) for details. It is an optional setting. By default STDERR messages are ignored. ``` **pass-header** <name> [ { if | unless } <condition> ] ``` Specify the name of a request header which will be passed to the FastCGI application. It may optionally be followed by an ACL-based condition, in which case it will only be evaluated if the condition is true. Most request headers are already available to the FastCGI application, prefixed with "HTTP_". Thus, this directive is only required to pass headers that are purposefully omitted. Currently, the headers "Authorization", "Proxy-Authorization" and hop-by-hop headers are omitted. Note that the headers "Content-type" and "Content-length" are never passed to the FastCGI application because they are already converted into parameters. 
``` **path-info** <regex> ``` Define a regular expression to extract the script-name and the path-info from the URL-decoded path. Thus, <regex> may have two captures: the first one to capture the script name and the second one to capture the path-info. The first one is mandatory, the second one is optional. This way, it is possible to extract the script-name from the path ignoring the path-info. It is an optional setting. If it is not defined, no matching is performed on the path. and the FastCGI parameters PATH_INFO and PATH_TRANSLATED are not filled. For security reason, when this regular expression is defined, the newline and the null characters are forbidden from the path, once URL-decoded. The reason to such limitation is because otherwise the matching always fails (due to a limitation one the way regular expression are executed in HAProxy). So if one of these two characters is found in the URL-decoded path, an error is returned to the client. The principle of least astonishment is applied here. ``` Example : ``` path-info ^(/.+\.php)(/.*)?$ # both script-name and path-info may be set path-info ^(/.+\.php) # the path-info is ignored ``` **option get-values** **no option get-values** ``` Enable or disable the retrieve of variables about connection management. HAProxy is able to send the record FCGI_GET_VALUES on connection establishment to retrieve the value for following variables: * FCGI_MAX_REQS The maximum number of concurrent requests this application will accept. * FCGI_MPXS_CONNS "0" if this application does not multiplex connections, "1" otherwise. Some FastCGI applications does not support this feature. Some others close the connection immediately after sending their response. So, by default, this option is disabled. Note that the maximum number of concurrent requests accepted by a FastCGI application is a connection variable. It only limits the number of streams per connection. If the global load must be limited on the application, the server parameters "maxconn" and "[pool-max-conn](#pool-max-conn)" must be set. In addition, if an application does not support connection multiplexing, the maximum number of concurrent requests is automatically set to 1. ``` **option keep-conn** **no option keep-conn** ``` Instruct the FastCGI application to keep the connection open or not after sending a response. If disabled, the FastCGI application closes the connection after responding to this request. By default, this option is enabled. ``` **option max-reqs** <reqs> ``` Define the maximum number of concurrent requests this application will accept. This option may be overwritten if the variable FCGI_MAX_REQS is retrieved during connection establishment. Furthermore, if the application does not support connection multiplexing, this option will be ignored. By default set to 1. ``` **option mpxs-conns** **no option mpxs-conns** ``` Enable or disable the support of connection multiplexing. This option may be overwritten if the variable FCGI_MPXS_CONNS is retrieved during connection establishment. It is disabled by default. ``` **set-param** <name> <fmt> [ { if | unless } <condition> ] ``` Set a FastCGI parameter that should be passed to this application. Its value, defined by <fmt> must follows the log-format rules (see [section 8.2.4](#8.2.4) "Custom Log format"). It may optionally be followed by an ACL-based condition, in which case it will only be evaluated if the condition is true. With this directive, it is possible to overwrite the value of default FastCGI parameters. 
If the value is evaluated to an empty string, the rule is ignored. These directives are evaluated in their declaration order. ``` Example : ``` # PHP only, required if PHP was built with --enable-force-cgi-redirect set-param REDIRECT_STATUS 200 set-param PHP_AUTH_DIGEST %[req.hdr(Authorization)] ``` #### 10.1.2. Proxy section **use-fcgi-app** <name> ``` Define the FastCGI application to use for the backend. ``` Arguments : ``` <name> is the name of the FastCGI application to use. ``` ``` This keyword is only available for HTTP proxies with the backend capability and with at least one FastCGI server. However, FastCGI servers can be mixed with HTTP servers. But except there is a good reason to do so, it is not recommended (see [section 10.3](#10.3) about the limitations for details). Only one application may be defined at a time per backend. Note that, once a FastCGI application is referenced for a backend, depending on the configuration some processing may be done even if the request is not sent to a FastCGI server. Rules to set parameters or pass headers to an application are evaluated. ``` #### 10.1.3. Example ``` frontend front-http mode http bind *:80 bind *: use_backend back-dynamic if { path_reg ^/.+\.php(/.*)?$ } default_backend back-static backend back-static mode http server www A.B.C.D:80 backend back-dynamic mode http use-fcgi-app php-fpm server php-fpm A.B.C.D:9000 proto fcgi fcgi-app php-fpm log-stderr global option keep-conn docroot /var/www/my-app index index.php path-info ^(/.+\.php)(/.*)?$ ``` ### 10.2. Default parameters ``` A Responder FastCGI application has the same purpose as a CGI/1.1 program. In the CGI/1.1 specification (RFC3875), several variables must be passed to the script. So HAProxy set them and some others commonly used by FastCGI applications. All these variables may be overwritten, with caution though. +-------------------+-----------------------------------------------------+ | AUTH_TYPE | Identifies the mechanism, if any, used by HAProxy | | | to authenticate the user. Concretely, only the | | | BASIC authentication mechanism is supported. | | | | +-------------------+-----------------------------------------------------+ | CONTENT_LENGTH | Contains the size of the message-body attached to | | | the request. It means only requests with a known | | | size are considered as valid and sent to the | | | application. | | | | +-------------------+-----------------------------------------------------+ | CONTENT_TYPE | Contains the type of the message-body attached to | | | the request. It may not be set. | | | | +-------------------+-----------------------------------------------------+ | DOCUMENT_ROOT | Contains the document root on the remote host under | | | which the script should be executed, as defined in | | | the application's configuration. | | | | +-------------------+-----------------------------------------------------+ | GATEWAY_INTERFACE | Contains the dialect of CGI being used by HAProxy | | | to communicate with the FastCGI application. | | | Concretely, it is set to "CGI/1.1". | | | | +-------------------+-----------------------------------------------------+ | PATH_INFO | Contains the portion of the URI path hierarchy | | | following the part that identifies the script | | | itself. To be set, the directive "[path-info](#path-info)" must | | | be defined. | | | | +-------------------+-----------------------------------------------------+ | PATH_TRANSLATED | If PATH_INFO is set, it is its translated version. 
| | | It is the concatenation of DOCUMENT_ROOT and | | | PATH_INFO. If PATH_INFO is not set, this parameters | | | is not set too. | | | | +-------------------+-----------------------------------------------------+ | QUERY_STRING | Contains the request's query string. It may not be | | | set. | | | | +-------------------+-----------------------------------------------------+ | REMOTE_ADDR | Contains the network address of the client sending | | | the request. | | | | +-------------------+-----------------------------------------------------+ | REMOTE_USER | Contains the user identification string supplied by | | | client as part of user authentication. | | | | +-------------------+-----------------------------------------------------+ | REQUEST_METHOD | Contains the method which should be used by the | | | script to process the request. | | | | +-------------------+-----------------------------------------------------+ | REQUEST_URI | Contains the request's URI. | | | | +-------------------+-----------------------------------------------------+ | SCRIPT_FILENAME | Contains the absolute pathname of the script. it is | | | the concatenation of DOCUMENT_ROOT and SCRIPT_NAME. | | | | +-------------------+-----------------------------------------------------+ | SCRIPT_NAME | Contains the name of the script. If the directive | | | "[path-info](#path-info)" is defined, it is the first part of the | | | URI path hierarchy, ending with the script name. | | | Otherwise, it is the entire URI path. | | | | +-------------------+-----------------------------------------------------+ | SERVER_NAME | Contains the name of the server host to which the | | | client request is directed. It is the value of the | | | header "Host", if defined. Otherwise, the | | | destination address of the connection on the client | | | side. | | | | +-------------------+-----------------------------------------------------+ | SERVER_PORT | Contains the destination TCP port of the connection | | | on the client side, which is the port the client | | | connected to. | | | | +-------------------+-----------------------------------------------------+ | SERVER_PROTOCOL | Contains the request's protocol. | | | | +-------------------+-----------------------------------------------------+ | SERVER_SOFTWARE | Contains the string "HAProxy" followed by the | | | current HAProxy version. | | | | +-------------------+-----------------------------------------------------+ | HTTPS | Set to a non-empty value ("on") if the script was | | | queried through the HTTPS protocol. | | | | +-------------------+-----------------------------------------------------+ ``` ### 10.3. Limitations ``` The current implementation have some limitations. The first one is about the way some request headers are hidden to the FastCGI applications. This happens during the headers analysis, on the backend side, before the connection establishment. At this stage, HAProxy know the backend is using a FastCGI application but it don't know if the request will be routed to a FastCGI server or not. But to hide request headers, it simply removes them from the HTX message. So, if the request is finally routed to an HTTP server, it never see these headers. For this reason, it is not recommended to mix FastCGI servers and HTTP servers under the same backend. Similarly, the rules "[set-param](#set-param)" and "[pass-header](#pass-header)" are evaluated during the request headers analysis. 
So the evaluation is always performed, even if the request is finally forwarded to an HTTP server. About the rules "[set-param](#set-param)", when a rule is applied, a pseudo header is added into the HTX message. So, in the same way as for HTTP header rewrites, it may fail if the buffer is full. The rules "[set-param](#set-param)" will compete with "http-request" ones. Finally, all FastCGI params and HTTP headers are sent into a unique record FCGI_PARAM. Encoding of this record must be done in one pass, otherwise a processing error is returned. It means the record FCGI_PARAM, once encoded, must not exceed the size of a buffer. However, there is no reserve to respect here.
```
11. Address formats
--------------------
```
Several statements such as "bind", "server", "[nameserver](#nameserver)" and "log" require an address. This address can be a host name, an IPv4 address, an IPv6 address, or '*'. The '*' is equal to the special address "0.0.0.0" and can be used, in the case of "bind" or "[dgram-bind](#dgram-bind)", to listen on all IPv4 addresses of the system. The IPv6 equivalent is '::'. Depending on the statement, a port or port range follows the IP address. This is mandatory on the 'bind' statement, optional on 'server'. This address can also begin with a slash '/'. It is then considered as the "unix" family, and '/' and the following characters must be present in the path. The default socket type or transport method ("datagram" or "stream") depends on the configuration statement using the address. Indeed, 'bind' and 'server' will use a "stream" socket type by default whereas 'log', 'nameserver' or 'dgram-bind' will use a "datagram". Optionally, a prefix can be used to force the address family and/or the socket type and the transport method.
```
11.1. Address family prefixes
------------------------------
```
'abns@<name>' following <name> is an abstract namespace (Linux only).
'fd@<n>' following address is a file descriptor <n> inherited from the parent. The fd must be bound and may or may not already be listening.
'ip@<address>[:port1[-port2]]' following <address> is considered as an IPv4 or IPv6 address depending on the syntax. Depending on the statement using this address, a port or a port range may or must be specified.
'ipv4@<address>[:port1[-port2]]' following <address> is always considered as an IPv4 address. Depending on the statement using this address, a port or a port range may or must be specified.
'ipv6@<address>[:port1[-port2]]' following <address> is always considered as an IPv6 address. Depending on the statement using this address, a port or a port range may or must be specified.
'sockpair@<n>' following address is the file descriptor of a connected unix socket or of a socketpair. During a connection, the initiator creates a pair of connected sockets, and passes one of them over the FD to the other end. The listener waits to receive the FD from the unix socket and uses it as if it were the FD of an accept(). Should be used carefully.
'unix@<path>' following string is considered as a UNIX socket <path>. This prefix is useful to declare a UNIX socket path which does not start with a slash '/'.
```
11.2. Socket type prefixes
---------------------------
```
The previous "Address family prefixes" can also be prefixed to force the socket type and the transport method. The default depends on the statement using this address, but in some cases the user may force it to a different one. This is the case for the "log" statement, where the default is syslog over UDP but it can be forced to use syslog over TCP.
Those prefixes were designed for internal purposes and users should instead use the aliases of the next section "11.3. Protocol prefixes". If users need one of those prefixes to perform what they expect because they cannot configure the same using the protocol prefixes, they should report this to the maintainers.
'stream+<family>@<address>' forces the socket type and transport method to "stream".
'dgram+<family>@<address>' forces the socket type and transport method to "datagram".
```
11.3. Protocol prefixes
------------------------
```
'tcp@<address>[:port1[-port2]]' following <address> is considered as an IPv4 or IPv6 address depending on the syntax, but the socket type and transport method are forced to "stream". Depending on the statement using this address, a port or a port range can or must be specified. It is considered as an alias of 'stream+ip@'.
'tcp4@<address>[:port1[-port2]]' following <address> is always considered as an IPv4 address, but the socket type and transport method are forced to "stream". Depending on the statement using this address, a port or port range can or must be specified. It is considered as an alias of 'stream+ipv4@'.
'tcp6@<address>[:port1[-port2]]' following <address> is always considered as an IPv6 address, but the socket type and transport method are forced to "stream". Depending on the statement using this address, a port or port range can or must be specified. It is considered as an alias of 'stream+ipv6@'.
'udp@<address>[:port1[-port2]]' following <address> is considered as an IPv4 or IPv6 address depending on the syntax, but the socket type and transport method are forced to "datagram". Depending on the statement using this address, a port or a port range can or must be specified. It is considered as an alias of 'dgram+ip@'.
'udp4@<address>[:port1[-port2]]' following <address> is always considered as an IPv4 address, but the socket type and transport method are forced to "datagram". Depending on the statement using this address, a port or port range can or must be specified. It is considered as an alias of 'dgram+ipv4@'.
'udp6@<address>[:port1[-port2]]' following <address> is always considered as an IPv6 address, but the socket type and transport method are forced to "datagram". Depending on the statement using this address, a port or port range can or must be specified. It is considered as an alias of 'dgram+ipv6@'.
'uxdg@<path>' following string is considered as a unix socket <path>, but the transport method is forced to "datagram". It is considered as an alias of 'dgram+unix@'.
'uxst@<path>' following string is considered as a unix socket <path>, but the transport method is forced to "stream". It is considered as an alias of 'stream+unix@'.
In future versions, other prefixes could be used to specify protocols like QUIC, which provides stream transport based on sockets of type "datagram".
```
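Example : the sketch below shows how these prefixes might be combined on "bind", "server" and "log" lines; the addresses, ports and socket paths used here are purely illustrative.
```
# force the IPv4 family on a listener
bind ipv4@0.0.0.0:80
# listen on a UNIX socket whose path does not start with a slash
bind unix@run/haproxy.sock
# send logs over TCP instead of the default UDP syslog
# ('tcp@' is an alias of 'stream+ip@')
log tcp@192.168.0.10:514 local0
# connect to a server over a stream UNIX socket
server app1 uxst@/var/run/app1.sock
```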
haproxy Management Guide Management Guide ================ ``` Note to documentation contributors : This document is formatted with 80 columns per line, with even number of spaces for indentation and without tabs. Please follow these rules strictly so that it remains easily printable everywhere. If you add sections, please update the summary below for easier searching. ``` 1. Prerequisites ----------------- ``` In this document it is assumed that the reader has sufficient administration skills on a UNIX-like operating system, uses the shell on a daily basis and is familiar with troubleshooting utilities such as strace and tcpdump. ``` 2. Quick reminder about HAProxy's architecture ----------------------------------------------- ``` HAProxy is a multi-threaded, event-driven, non-blocking daemon. This means is uses event multiplexing to schedule all of its activities instead of relying on the system to schedule between multiple activities. Most of the time it runs as a single process, so the output of "ps aux" on a system will report only one "haproxy" process, unless a soft reload is in progress and an older process is finishing its job in parallel to the new one. It is thus always easy to trace its activity using the strace utility. In order to scale with the number of available processors, by default haproxy will start one worker thread per processor it is allowed to run on. Unless explicitly configured differently, the incoming traffic is spread over all these threads, all running the same event loop. A great care is taken to limit inter-thread dependencies to the strict minimum, so as to try to achieve near-linear scalability. This has some impacts such as the fact that a given connection is served by a single thread. Thus in order to use all available processing capacity, it is needed to have at least as many connections as there are threads, which is almost always granted. HAProxy is designed to isolate itself into a chroot jail during startup, where it cannot perform any file-system access at all. This is also true for the libraries it depends on (eg: libc, libssl, etc). The immediate effect is that a running process will not be able to reload a configuration file to apply changes, instead a new process will be started using the updated configuration file. Some other less obvious effects are that some timezone files or resolver files the libc might attempt to access at run time will not be found, though this should generally not happen as they're not needed after startup. A nice consequence of this principle is that the HAProxy process is totally stateless, and no cleanup is needed after it's killed, so any killing method that works will do the right thing. HAProxy doesn't write log files, but it relies on the standard syslog protocol to send logs to a remote server (which is often located on the same system). HAProxy uses its internal clock to enforce timeouts, that is derived from the system's time but where unexpected drift is corrected. This is done by limiting the time spent waiting in poll() for an event, and measuring the time it really took. In practice it never waits more than one second. This explains why, when running strace over a completely idle process, periodic calls to poll() (or any of its variants) surrounded by two gettimeofday() calls are noticed. They are normal, completely harmless and so cheap that the load they imply is totally undetectable at the system scale, so there's nothing abnormal there. 
Example : 16:35:40.002320 gettimeofday({1442759740, 2605}, NULL) = 0 16:35:40.002942 epoll_wait(0, {}, 200, 1000) = 0 16:35:41.007542 gettimeofday({1442759741, 7641}, NULL) = 0 16:35:41.007998 gettimeofday({1442759741, 8114}, NULL) = 0 16:35:41.008391 epoll_wait(0, {}, 200, 1000) = 0 16:35:42.011313 gettimeofday({1442759742, 11411}, NULL) = 0 HAProxy is a TCP proxy, not a router. It deals with established connections that have been validated by the kernel, and not with packets of any form nor with sockets in other states (eg: no SYN_RECV nor TIME_WAIT), though their existence may prevent it from binding a port. It relies on the system to accept incoming connections and to initiate outgoing connections. An immediate effect of this is that there is no relation between packets observed on the two sides of a forwarded connection, which can be of different size, numbers and even family. Since a connection may only be accepted from a socket in LISTEN state, all the sockets it is listening to are necessarily visible using the "netstat" utility to show listening sockets. Example : # netstat -ltnp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1629/sshd tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2847/haproxy tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 2847/haproxy ``` 3. Starting HAProxy -------------------- ``` HAProxy is started by invoking the "haproxy" program with a number of arguments passed on the command line. The actual syntax is : $ haproxy [<options>]* ``` **where** [<options>]\* is any number of options. An option always starts with '-' ``` followed by one of more letters, and possibly followed by one or multiple extra arguments. Without any option, HAProxy displays the help page with a reminder about supported options. Available options may vary slightly based on the operating system. A fair number of these options overlap with an equivalent one if the "global" section. In this case, the command line always has precedence over the configuration file, so that the command line can be used to quickly enforce some settings without touching the configuration files. The current list of options is : -- <cfgfile>* : all the arguments following "--" are paths to configuration file/directory to be loaded and processed in the declaration order. It is mostly useful when relying on the shell to load many files that are numerically ordered. See also "-f". The difference between "--" and "-f" is that one "-f" must be placed before each file name, while a single "--" is needed before all file names. Both options can be used together, the command line ordering still applies. When more than one file is specified, each file must start on a section boundary, so the first keyword of each file must be one of "global", "defaults", "peers", "listen", "frontend", "backend", and so on. A file cannot contain just a server list for example. -f <cfgfile|cfgdir> : adds <cfgfile> to the list of configuration files to be loaded. If <cfgdir> is a directory, all the files (and only files) it contains are added in lexical order (using LC_COLLATE=C) to the list of configuration files to be loaded ; only files with ".cfg" extension are added, only non hidden files (not prefixed with ".") are added. Configuration files are loaded and processed in their declaration order. This option may be specified multiple times to load multiple files. See also "--". 
The difference between "--" and "-f" is that one "-f" must be placed before each file name, while a single "--" is needed before all file names. Both options can be used together, the command line ordering still applies. When more than one file is specified, each file must start on a section boundary, so the first keyword of each file must be one of "global", "defaults", "peers", "listen", "frontend", "backend", and so on. A file cannot contain just a server list for example. -C <dir> : changes to directory <dir> before loading configuration files. This is useful when using relative paths. Warning when using wildcards after "--" which are in fact replaced by the shell before starting haproxy. -D : start as a daemon. The process detaches from the current terminal after forking, and errors are not reported anymore in the terminal. It is equivalent to the "daemon" keyword in the "global" section of the configuration. It is recommended to always force it in any init script so that a faulty configuration doesn't prevent the system from booting. -L <name> : change the local peer name to <name>, which defaults to the local hostname. This is used only with peers replication. You can use the variable $HAPROXY_LOCALPEER in the configuration file to reference the peer name. -N <limit> : sets the default per-proxy maxconn to <limit> instead of the builtin default value (usually 2000). Only useful for debugging. -V : enable verbose mode (disables quiet mode). Reverts the effect of "-q" or "quiet". -W : master-worker mode. It is equivalent to the "master-worker" keyword in the "global" section of the configuration. This mode will launch a "master" which will monitor the "workers". Using this mode, you can reload HAProxy directly by sending a SIGUSR2 signal to the master. The master-worker mode is compatible either with the foreground or daemon mode. It is recommended to use this mode with multiprocess and systemd. -Ws : master-worker mode with support of `notify` type of systemd service. This option is only available when HAProxy was built with `USE_SYSTEMD` build option enabled. -c : only performs a check of the configuration files and exits before trying to bind. The exit status is zero if everything is OK, or non-zero if an error is encountered. Presence of warnings will be reported if any. -cc : evaluates a condition as used within a conditional block of the configuration. The exit status is zero if the condition is true, 1 if the condition is false or 2 if an error is encountered. -d : enable debug mode. This disables daemon mode, forces the process to stay in foreground and to show incoming and outgoing events. It must never be used in an init script. -dC[key] : dump the configuration file. It is performed after the lines are tokenized, so comments are stripped and indenting is forced. If a non-zero key is specified, lines are truncated before sensitive/confidential fields, and identifiers and addresses are emitted hashed with this key using the same algorithmm as the one used by the anonymized mode on the CLI. This means that the output may safely be shared with a developer who needs it to figure what's happening in a dump that was anonymized using the same key. Please also see the CLI's "[set anon](#set%20anon)" command. -dD : enable diagnostic mode. This mode will output extra warnings about suspicious configuration statements. This will never prevent startup even in "zero-warning" mode nor change the exit status code. -dG : disable use of getaddrinfo() to resolve host names into addresses. 
It can be used when suspecting that getaddrinfo() doesn't work as expected. This option was made available because many bogus implementations of getaddrinfo() exist on various systems and cause anomalies that are difficult to troubleshoot. -dK<class[,class]*> : dumps the list of registered keywords in each class. The list of classes is available with "-dKhelp". All classes may be dumped using "-dKall", otherwise a selection of those shown in the help can be specified as a comma-delimited list. The output format will vary depending on what class of keywords is being dumped (e.g. "cfg" will show the known configuration keywords in a format resembling the config file format while "smp" will show sample fetch functions prefixed with a compatibility matrix with each rule set). These may rarely be used as-is by humans but can be of great help for external tools that try to detect the appearance of new keywords at certain places to automatically update some documentation, syntax highlighting files, configuration parsers, API etc. The output format may evolve a bit over time so it is really recommended to use this output mostly to detect differences with previous archives. Note that not all keywords are listed because many keywords have existed long before the different keyword registration subsystems were created, and they do not appear there. However since new keywords are only added via the modern mechanisms, it's reasonably safe to assume that this output may be used to detect language additions with a good accuracy. The keywords are only dumped after the configuration is fully parsed, so that even dynamically created keywords can be dumped. A good way to dump and exit is to run a silent config check on an existing configuration: ./haproxy -dKall -q -c -f foo.cfg If no configuration file is available, using "-f /dev/null" will work as well to dump all default keywords, but then the return status will not be zero since there will be no listener, and will have to be ignored. -dL : dumps the list of dynamic shared libraries that are loaded at the end of the config processing. This will generally also include deep dependencies such as anything loaded from Lua code for example, as well as the executable itself. The list is printed in a format that ought to be easy enough to sanitize to directly produce a tarball of all dependencies. Since it doesn't stop the program's startup, it is recommended to only use it in combination with "-c" and "-q" where only the list of loaded objects will be displayed (or nothing in case of error). In addition, keep in mind that when providing such a package to help with a core file analysis, most libraries are in fact symbolic links that need to be dereferenced when creating the archive: ./haproxy -W -q -c -dL -f foo.cfg | tar -T - -hzcf archive.tgz -dM[<byte>[,]][help|options,...] : forces memory poisoning, and/or changes memory other debugging options. Memory poisonning means that each and every memory region allocated with malloc() or pool_alloc() will be filled with <byte> before being passed to the caller. When <byte> is not specified, it defaults to 0x50 ('P'). While this slightly slows down operations, it is useful to reliably trigger issues resulting from missing initializations in the code that cause random crashes. Note that -dM0 has the effect of turning any malloc() into a calloc(). In any case if a bug appears or disappears when using this option it means there is a bug in haproxy, so please report it. 
A number of other options are available either alone or after a comma following the byte. The special option "[help](#help)" will list the currently supported options and their current value. Each debugging option may be forced on or off. The most optimal options are usually chosen at build time based on the operating system and do not need to be adjusted, unless suggested by a developer. Supported debugging options include (set/clear): - fail / no-fail: This enables randomly failing memory allocations, in conjunction with the global "tune.fail-alloc" setting. This is used to detect missing error checks in the code. - no-merge / merge: By default, pools of very similar sizes are merged, resulting in more efficiency, but this complicates the analysis of certain memory dumps. This option allows to disable this mechanism, and may slightly increase the memory usage. - cold-first / hot-first: In order to optimize the CPU cache hit ratio, by default the most recently released objects ("hot") are recycled for new allocations. But doing so also complicates analysis of memory dumps and may hide use-after-free bugs. This option allows to instead pick the coldest objects first, which may result in a slight increase of CPU usage. - integrity / no-integrity: When this option is enabled, memory integrity checks are enabled on the allocated area to verify that it hasn't been modified since it was last released. This works best with "no-merge", "cold-first" and "tag". Enabling this option will slightly increase the CPU usage. - no-global / global: Depending on the operating system, a process-wide global memory cache may be enabled if it is estimated that the standard allocator is too slow or inefficient with threads. This option allows to forcefully disable it or enable it. Disabling it may result in a CPU usage increase with inefficient allocators. Enabling it may result in a higher memory usage with efficient allocators. - no-cache / cache: Each thread uses a very fast local object cache for allocations, which is always enabled by default. This option allows to disable it. Since the global cache also passes via the local caches, this will effectively result in disabling all caches and allocating directly from the default allocator. This may result in a significant increase of CPU usage, but may also result in small memory savings on tiny systems. - caller / no-caller: Enabling this option reserves some extra space in each allocated object to store the address of the last caller that allocated or released it. This helps developers go back in time when analysing memory dumps and to guess how something unexpected happened. - tag / no-tag: Enabling this option reserves some extra space in each allocated object to store a tag that allows to detect bugs such as double-free, freeing an invalid object, and buffer overflows. It offers much stronger reliability guarantees at the expense of 4 or 8 extra bytes per allocation. It usually is the first step to detect memory corruption. - poison / no-poison: Enabling this option will fill allocated objects with a fixed pattern that will make sure that some accidental values such as 0 will not be present if a newly added field was mistakenly forgotten in an initialization routine. Such bugs tend to rarely reproduce, especially when pools are not merged. This is normally enabled by directly passing the byte's value to -dM but using this option allows to disable/enable use of a previously set value. -dS : disable use of the splice() system call. 
It is equivalent to the "global" section's "nosplice" keyword. This may be used when splice() is suspected to behave improperly or to cause performance issues, or when using strace to see the forwarded data (which do not appear when using splice()). -dV : disable SSL verify on the server side. It is equivalent to having "ssl-server-verify none" in the "global" section. This is useful when trying to reproduce production issues out of the production environment. Never use this in an init script as it degrades SSL security to the servers. -dW : if set, haproxy will refuse to start if any warning was emitted while processing the configuration. This helps detect subtle mistakes and keep the configuration clean and portable across versions. It is recommended to set this option in service scripts when configurations are managed by humans, but it is recommended not to use it with generated configurations, which tend to emit more warnings. It may be combined with "-c" to cause warnings in checked configurations to fail. This is equivalent to global option "zero-warning". -db : disable background mode and multi-process mode. The process remains in foreground. It is mainly used during development or during small tests, as Ctrl-C is enough to stop the process. Never use it in an init script. -de : disable the use of the "epoll" poller. It is equivalent to the "global" section's keyword "noepoll". It is mostly useful when suspecting a bug related to this poller. On systems supporting epoll, the fallback will generally be the "poll" poller. -dk : disable the use of the "kqueue" poller. It is equivalent to the "global" section's keyword "nokqueue". It is mostly useful when suspecting a bug related to this poller. On systems supporting kqueue, the fallback will generally be the "poll" poller. -dp : disable the use of the "poll" poller. It is equivalent to the "global" section's keyword "nopoll". It is mostly useful when suspecting a bug related to this poller. On systems supporting poll, the fallback will generally be the "select" poller, which cannot be disabled and is limited to 1024 file descriptors. -dr : ignore server address resolution failures. It is very common when validating a configuration out of production not to have access to the same resolvers and to fail on server address resolution, making it difficult to test a configuration. This option simply appends the "none" method to the list of address resolution methods for all servers, ensuring that even if the libc fails to resolve an address, the startup sequence is not interrupted. -m <limit> : limit the total allocatable memory to <limit> megabytes across all processes. This may cause some connection refusals or some slowdowns depending on the amount of memory needed for normal operations. This is mostly used to force the processes to work in a constrained resource usage scenario. It is important to note that the memory is not shared between processes, so in a multi-process scenario, this value is first divided by global.nbproc before forking. -n <limit> : limits the per-process connection limit to <limit>. This is equivalent to the global section's keyword "maxconn". It has precedence over this keyword. This may be used to quickly force lower limits to avoid a service outage on systems where resource limits are too low. -p <file> : write all processes' pids into <file> during startup. This is equivalent to the "global" section's keyword "pidfile". 
The file is opened before entering the chroot jail, and after doing the chdir() implied by "-C". Each pid appears on its own line. -q : set "quiet" mode. This disables some messages during the configuration parsing and during startup. It can be used in combination with "-c" to just check if a configuration file is valid or not. -S <bind>[,bind_options...]: in master-worker mode, bind a master CLI, which allows access to every process, whether running or leaving. For security reasons, it is recommended to bind the master CLI to a local UNIX socket. The bind options are the same as the keyword "bind" in the configuration file with words separated by commas instead of spaces. Note that this socket can't be used to retrieve the listening sockets from an old process during a seamless reload. -sf <pid>* : send the "finish" signal (SIGUSR1) to older processes after boot completion to ask them to finish what they are doing and to leave. <pid> is a list of pids to signal (one per argument). The list ends on any option starting with a "-". It is not a problem if the list of pids is empty, so that it can be built on the fly based on the result of a command like "pidof" or "pgrep". QUIC connections will be aborted. -st <pid>* : send the "terminate" signal (SIGTERM) to older processes after boot completion to terminate them immediately without finishing what they were doing. <pid> is a list of pids to signal (one per argument). The list ends on any option starting with a "-". It is not a problem if the list of pids is empty, so that it can be built on the fly based on the result of a command like "pidof" or "pgrep". -v : report the version and build date. -vv : display the version, build options, libraries versions and usable pollers. This output is systematically requested when filing a bug report. -x <unix_socket> : connect to the specified socket and try to retrieve any listening sockets from the old process, and use them instead of trying to bind new ones. This is useful to avoid missing any new connection when reloading the configuration on Linux. The capability must be enabled on the stats socket using "expose-fd listeners" in your configuration. In master-worker mode, the master will use this option upon a reload with the "sockpair@" syntax, which allows the master to connect directly to a worker without using the stats socket declared in the configuration.
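To make a few of these options more concrete, the following sketch combines a configuration check, a master-worker start with a master CLI, and a seamless reload retrieving the listeners from the previous process. The file and socket paths are only examples, and the last command assumes that "expose-fd listeners" is set on the stats socket :

   # validate the configuration without binding anything
   haproxy -c -q -f /etc/haproxy/haproxy.cfg

   # start in master-worker mode with a master CLI on a local UNIX socket
   haproxy -W -S /var/run/haproxy-master.sock -f /etc/haproxy/haproxy.cfg

   # seamless reload without master-worker mode: fetch the listening sockets
   # from the old process over the stats socket, then ask it to finish
   haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
         -x /var/run/haproxy.sock -sf $(cat /var/run/haproxy.pid)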
A safe way to start HAProxy from an init file consists in forcing the daemon mode, storing existing pids to a pid file and using this pid file to notify older processes to finish before leaving : haproxy -f /etc/haproxy.cfg \ -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) When the configuration is split into a few specific files (eg: tcp vs http), it is recommended to use the "-f" option : haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \ -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \ -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \ -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) When an unknown number of files is expected, such as customer-specific files, it is recommended to assign them a name starting with a fixed-size sequence number and to use "--" to load them, possibly after loading some defaults : haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \ -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \ -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \ -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) \ -f /etc/haproxy/default-customers.cfg -- /etc/haproxy/customers/* Sometimes a failure to start may happen for whatever reason. Then it is important to verify if the version of HAProxy you are invoking is the expected version and if it supports the features you are expecting (eg: SSL, PCRE, compression, Lua, etc). This can be verified using "haproxy -vv". Some important information such as certain build options, the target system and the versions of the libraries being used are reported there. It is also what you will systematically be asked for when posting a bug report : $ haproxy -vv HAProxy version 1.6-dev7-a088d3-4 2015/10/08 Copyright 2000-2015 Willy Tarreau <[email protected]> Build options : TARGET = linux2628 CPU = generic CC = gcc CFLAGS = -pg -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement \ -DBUFSIZE=8030 -DMAXREWRITE=1030 -DSO_MARK=36 -DTCP_REPAIR=19 OPTIONS = USE_ZLIB=1 USE_DLMALLOC=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 Default settings : maxconn = 2000, bufsize = 8030, maxrewrite = 1030, maxpollevents = 200 Encrypted password support via crypt(3): yes Built with zlib version : 1.2.6 Compression algorithms supported : identity("identity"), deflate("deflate"), \ raw-deflate("deflate"), gzip("gzip") Built with OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015 Running on OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015 OpenSSL library supports TLS extensions : yes OpenSSL library supports SNI : yes OpenSSL library supports prefer-server-ciphers : yes Built with PCRE version : 8.12 2011-01-15 PCRE library supports JIT : no (USE_PCRE_JIT not set) Built with Lua version : Lua 5.3.1 Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND Available polling systems : epoll : pref=300, test result OK poll : pref=200, test result OK select : pref=150, test result OK Total: 3 (3 usable), will use epoll. The relevant information that many non-developer users can verify here is : - the version : 1.6-dev7-a088d3-4 above means the code is currently at commit ID "a088d3" which is the 4th one after the official version "1.6-dev7". Version 1.6-dev7 would show as "1.6-dev7-8c1ad7". What matters here is in fact "1.6-dev7". This is the 7th development version of what will become version 1.6 in the future. A development version is not suitable for use in production (unless you know exactly what you are doing).
A stable version will show as a 3-number version, such as "1.5.14-16f863", indicating the 14th level of fix on top of version 1.5. This is a production-ready version. - the release date : 2015/10/08. It is represented in the universal year/month/day format. Here this means October 8th, 2015. Given that stable releases are issued every few months (1-2 months at the beginning, sometimes 6 months once the product becomes very stable), if you're seeing an old date here, it means you're probably affected by a number of bugs or security issues that have since been fixed and that it might be worth checking on the official site. - build options : they are relevant to people who build their packages themselves; they can explain why things are not behaving as expected. For example the development version above was built for Linux 2.6.28 or later, targeting a generic CPU (no CPU-specific optimizations), and lacks any code optimization (-O0) so it will perform poorly in terms of performance. - libraries versions : zlib version is reported as found in the library itself. In general zlib is considered a very stable product and upgrades are almost never needed. OpenSSL reports two versions, the version used at build time and the one being used, as found on the system. These ones may differ by the last letter but never by the numbers. The build date is also reported because most OpenSSL bugs are security issues and need to be taken seriously, so this library absolutely needs to be kept up to date. Seeing a 4-month-old version here is highly suspicious and indeed an update was missed. PCRE provides very fast regular expressions and is highly recommended. Certain of its extensions such as JIT are not present in all versions and still young so some people prefer not to build with them, which is why the build status is reported as well. Regarding the Lua scripting language, HAProxy expects version 5.3 which is very young since it was released a little time before HAProxy 1.6. It is important to check on the Lua web site if some fixes are proposed for this branch. - Available polling systems will affect the process's scalability when dealing with more than about one thousand concurrent connections. These ones are only available when the correct system was indicated in the TARGET variable during the build. The "epoll" mechanism is highly recommended on Linux, and the kqueue mechanism is highly recommended on BSD. Lacking them will result in poll() or even select() being used, causing a high CPU usage when dealing with a lot of connections. ``` 4. Stopping and restarting HAProxy ----------------------------------- ``` HAProxy supports a graceful and a hard stop. The hard stop is simple: when the SIGTERM signal is sent to the haproxy process, it immediately quits and all established connections are closed. The graceful stop is triggered when the SIGUSR1 signal is sent to the haproxy process. It consists in only unbinding from listening ports, but continuing to process existing connections until they close. Once the last connection is closed, the process leaves. The hard stop method is used for the "stop" or "restart" actions of the service management script. The graceful stop is used for the "[reload](#reload)" action which tries to seamlessly reload a new configuration in a new process. Both of these signals may be sent by the new haproxy process itself during a reload or restart, so that they are sent at the latest possible moment and only if absolutely required.
This is what is performed by the "-st" (hard) and "-sf" (graceful) options respectively. In master-worker mode, it is not needed to start a new haproxy process in order to reload the configuration. The master process reacts to the SIGUSR2 signal by reexecuting itself with the -sf parameter followed by the PIDs of the workers. The master will then parse the configuration file and fork new workers. To understand better how these signals are used, it is important to understand the whole restart mechanism. First, an existing haproxy process is running. The administrator uses a system-specific command such as "/etc/init.d/haproxy reload" to indicate they want to take the new configuration file into effect. What happens then is the following. First, the service script (/etc/init.d/haproxy or equivalent) will verify that the configuration file parses correctly using "haproxy -c". After that it will try to start haproxy with this configuration file, using "-st" or "-sf". Then HAProxy tries to bind to all listening ports. If some fatal errors happen (eg: address not present on the system, permission denied), the process quits with an error. If a socket binding fails because a port is already in use, then the process will first send a SIGTTOU signal to all the pids specified in the "-st" or "-sf" pid list. This is what is called the "pause" signal. It instructs all existing haproxy processes to temporarily stop listening to their ports so that the new process can try to bind again. During this time, the old process continues to process existing connections. If the binding still fails (because for example a port is shared with another daemon), then the new process sends a SIGTTIN signal to the old processes to instruct them to resume operations just as if nothing happened. The old processes will then restart listening to the ports and continue to accept connections. Note that this mechanism is system dependent and some operating systems may not support it in multi-process mode. If the new process manages to bind correctly to all ports, then it sends either the SIGTERM (hard stop in case of "-st") or the SIGUSR1 (graceful stop in case of "-sf") to all processes to notify them that it is now in charge of operations and that the old processes will have to leave, either immediately or once they have finished their job. It is important to note that during this timeframe, there are two small windows of a few milliseconds each where it is possible that a few connection failures will be noticed during high loads. Typically observed failure rates are around 1 failure during a reload operation every 10000 new connections per second, which means that a heavily loaded site running at 30000 new connections per second may see about 3 failed connections upon every reload. The two situations where this happens are : - if the new process fails to bind due to the presence of the old process, it will first have to go through the SIGTTOU+SIGTTIN sequence, which typically lasts about one millisecond for a few tens of frontends, and during which some ports will no longer be bound to the old process and not yet bound to the new one. HAProxy works around this on systems that support the SO_REUSEPORT socket option, as it allows the new process to bind without first asking the old one to unbind. Most BSD systems have been supporting this almost forever. Linux supported this in version 2.0 and dropped it around 2.2, but some patches were floating around by then.
It was reintroduced in kernel 3.9, so if you are observing a connection failure rate above the one mentioned above, please ensure that your kernel is 3.9 or newer, or that relevant patches were backported to your kernel (less likely). - when the old processes close the listening ports, the kernel may not always redistribute any pending connection that was remaining in the socket's backlog. Under high loads, a SYN packet may arrive just before the socket is closed, and will lead to an RST packet being sent to the client. In some critical environments where even one drop is not acceptable, these ones are sometimes dealt with using firewall rules to block SYN packets during the reload, forcing the client to retransmit. This is totally system-dependent, as some systems might be able to visit other listening queues and avoid this RST. A second case concerns the ACK from the client on a local socket that was in SYN_RECV state just before the close. This ACK will lead to an RST packet while the haproxy process is still not aware of it. This one is harder to get rid of, though the firewall filtering rules mentioned above will work well if applied one second or so before restarting the process. For the vast majority of users, such drops will never ever happen since they don't have enough load to trigger the race conditions. And for most high traffic users, the failure rate is still well within the noise margin provided that at least SO_REUSEPORT is properly supported on their systems. QUIC limitations: soft-stop is not supported. In case of reload, QUIC connections will not be preserved. ``` 5. File-descriptor limitations ------------------------------- ``` In order to ensure that all incoming connections will successfully be served, HAProxy computes at load time the total number of file descriptors that will be needed during the process's life. A regular Unix process is generally granted 1024 file descriptors by default, and a privileged process can raise this limit itself. This is one reason for starting HAProxy as root and letting it adjust the limit. The default limit of 1024 file descriptors roughly allows about 500 concurrent connections to be processed. The computation is based on the global maxconn parameter which limits the total number of connections per process, the number of listeners, the number of servers which have a health check enabled, the agent checks, the peers, the loggers and possibly a few other technical requirements. A rough estimate of this number consists in simply doubling the maxconn value and adding a few tens to get the approximate number of file descriptors needed. Originally HAProxy did not know how to compute this value, and it was necessary to pass the value using the "ulimit-n" setting in the global section. This explains why even today a lot of configurations are seen with this setting present. Unfortunately it was often miscalculated resulting in connection failures when approaching maxconn instead of throttling incoming connections while waiting for the needed resources. For this reason it is important to remove any vestigial "ulimit-n" setting that can remain from very old versions. Raising the number of file descriptors to accept even moderate loads is mandatory but comes with some OS-specific adjustments. First, the select() polling system is limited to 1024 file descriptors.
In fact on Linux it used to be capable of handling more but since certain OSes ship with excessively restrictive SELinux policies forbidding the use of select() with more than 1024 file descriptors, HAProxy now refuses to start in this case in order to avoid any issue at run time. On all supported operating systems, poll() is available and will not suffer from this limitation. It is automatically picked so there is nothing to do to get a working configuration. But poll() becomes very slow when the number of file descriptors increases. While HAProxy does its best to limit this performance impact (eg: via the use of the internal file descriptor cache and batched processing), a good rule of thumb is that using poll() with more than a thousand concurrent connections will use a lot of CPU. For Linux systems based on kernels 2.6 and above, the epoll() system call will be used. It's a much more scalable mechanism relying on callbacks in the kernel that guarantee a constant wake up time regardless of the number of registered monitored file descriptors. It is automatically used where detected, provided that HAProxy had been built for one of the Linux flavors. Its presence and support can be verified using "haproxy -vv". For BSD systems which support it, kqueue() is available as an alternative. It is much faster than poll() and even slightly faster than epoll() thanks to its batched handling of changes. At least FreeBSD and OpenBSD support it. Just like with Linux's epoll(), its support and availability are reported in the output of "haproxy -vv". Having a good poller is one thing, but it is mandatory that the process can reach the limits. When HAProxy starts, it immediately sets the new process's file descriptor limits and verifies if it succeeds. In case of failure, it reports it before forking so that the administrator can see the problem. As long as the process is started as root, there should be no reason for this setting to fail. However, it can fail if the process is started by an unprivileged user. If there is a compelling reason for *not* starting haproxy as root (eg: started by end users, or by a per-application account), then the file descriptor limit can be raised by the system administrator for this specific user (see the sketch below). The effectiveness of the setting can be verified by issuing "ulimit -n" from the user's command line. It should reflect the new limit. Warning: when an unprivileged user's limits are changed in this user's account, it is fairly common that these values are only considered when the user logs in and not at all in some scripts run at system boot time nor in crontabs. This is totally dependent on the operating system, keep in mind to check "ulimit -n" before starting haproxy when running this way. The general advice is never to start haproxy as an unprivileged user for production purposes. Another good reason is that it prevents haproxy from enabling some security protections. Once it is certain that the system will allow the haproxy process to use the requested number of file descriptors, two new system-specific limits may be encountered. The first one is the system-wide file descriptor limit, which is the total number of file descriptors opened on the system, covering all processes. When this limit is reached, accept() or socket() will typically return ENFILE. The second one is the per-process hard limit on the number of file descriptors, it prevents setrlimit() from being set higher. Both are very dependent on the operating system.
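As referenced above, here is a minimal sketch of how an administrator might raise and verify the limit for a dedicated unprivileged account on a Linux system using pam_limits; the account name and values are only illustrative and the exact mechanism depends on the distribution :

   # /etc/security/limits.conf (assumes pam_limits is applied on the login path)
   haproxy  soft  nofile  100000
   haproxy  hard  nofile  100000

   # verify from that user's shell before starting haproxy
   $ ulimit -n
   100000

On systemd-based systems, the equivalent for a service started at boot is usually the LimitNOFILE= directive in the unit file rather than limits.conf.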
On Linux, the system limit is set at boot based on the amount of memory. It can be changed with the "fs.file-max" sysctl. And the per-process hard limit is set to 1048576 by default, but it can be changed using the "fs.nr_open" sysctl. File descriptor limitations may be observed on a running process when they are set too low. The strace utility will report that accept() and socket() return "-1 EMFILE" when the process's limits have been reached. In this case, simply raising the "ulimit-n" value (or removing it) will solve the problem. If these system calls return "-1 ENFILE" then it means that the kernel's limits have been reached and that something must be done on a system-wide parameter. These troubles must absolutely be addressed, as they result in high CPU usage (when accept() fails) and failed connections that are generally visible to the user. One solution also consists in lowering the global maxconn value to enforce serialization, and possibly disabling HTTP keep-alive to force connections to be released and reused faster. ``` 6. Memory management --------------------- ``` HAProxy uses a simple and fast pool-based memory management. Since it relies on a small number of different object types, it's much more efficient to pick new objects from a pool which already contains objects of the appropriate size than to call malloc() for each different size. The pools are organized as a stack or LIFO, so that newly allocated objects are taken from recently released objects still hot in the CPU caches. Pools of similar sizes are merged together, in order to limit memory fragmentation. By default, since the focus is set on performance, each released object is put back into the pool it came from, and allocated objects are never freed since they are expected to be reused very soon. On the CLI, it is possible to check how memory is being used in pools thanks to the "[show pools](#show%20pools)" command : > show pools Dumping pools usage. Use SIGQUIT to flush them.
- Pool cache_st (16 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccc40=03 [SHARED] - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 0 failures, 2 users, @0x9ccac0=00 [SHARED] - Pool comp_state (48 bytes) : 3 allocated (144 bytes), 3 used, 0 failures, 5 users, @0x9cccc0=04 [SHARED] - Pool filter (64 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 3 users, @0x9ccbc0=02 [SHARED] - Pool vars (80 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccb40=01 [SHARED] - Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9cd240=15 [SHARED] - Pool task (144 bytes) : 55 allocated (7920 bytes), 55 used, 0 failures, 1 users, @0x9cd040=11 [SHARED] - Pool session (160 bytes) : 1 allocated (160 bytes), 1 used, 0 failures, 1 users, @0x9cd140=13 [SHARED] - Pool h2s (208 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccec0=08 [SHARED] - Pool h2c (288 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cce40=07 [SHARED] - Pool spoe_ctx (304 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 2 users, @0x9ccf40=09 [SHARED] - Pool connection (400 bytes) : 2 allocated (800 bytes), 2 used, 0 failures, 1 users, @0x9cd1c0=14 [SHARED] - Pool hdr_idx (416 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd340=17 [SHARED] - Pool dns_resolut (480 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccdc0=06 [SHARED] - Pool dns_answer_ (576 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9ccd40=05 [SHARED] - Pool stream (960 bytes) : 1 allocated (960 bytes), 1 used, 0 failures, 1 users, @0x9cd0c0=12 [SHARED] - Pool requri (1024 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users, @0x9cd2c0=16 [SHARED] - Pool buffer (8030 bytes) : 3 allocated (24090 bytes), 2 used, 0 failures, 1 users, @0x9cd3c0=18 [SHARED] - Pool trash (8062 bytes) : 1 allocated (8062 bytes), 1 used, 0 failures, 1 users, @0x9cd440=19 Total: 19 pools, 42296 bytes allocated, 34266 used. The pool name is only indicative; it's the name of the first object type using this pool. The size in parentheses is the object size for objects in this pool. Object sizes are always rounded up to the closest multiple of 16 bytes. The number of objects currently allocated and the equivalent number of bytes are reported so that it is easy to know which pool is responsible for the highest memory usage. The number of objects currently in use is reported as well in the "used" field. The difference between "allocated" and "used" corresponds to the objects that have been freed and are available for immediate use. The address at the end of the line is the pool's address, and the following number is the pool index when it exists, or is reported as -1 if no index was assigned. It is possible to limit the amount of memory allocated per process using the "-m" command line option, followed by a number of megabytes. It covers all of the process's addressable space, so that includes memory used by some libraries as well as the stack, but it is a reliable limit when building a resource-constrained system. It works the same way as "ulimit -v" on systems which have it, or "ulimit -d" for the other ones. If a memory allocation fails due to the memory limit being reached or because the system doesn't have enough memory, then haproxy will first start to free all available objects from all pools before attempting to allocate memory again.
This mechanism of releasing unused memory can be triggered by sending the signal SIGQUIT to the haproxy process. When doing so, the pools state prior to the flush will also be reported to stderr when the process runs in foreground. During a reload operation, the process switched to the graceful stop state also automatically performs some flushes after releasing any connection so that all possible memory is released to save it for the new process. ``` 7. CPU usage ------------- ``` HAProxy normally spends most of its time in the system and a smaller part in userland. A finely tuned 3.5 GHz CPU can sustain a rate of about 80000 end-to-end connection setups and closes per second at 100% CPU on a single core. When one core is saturated, typical figures are : - 95% system, 5% user for long TCP connections or large HTTP objects - 85% system and 15% user for short TCP connections or small HTTP objects in close mode - 70% system and 30% user for small HTTP objects in keep-alive mode The amount of rules processing and regular expressions will increase the userland part. The presence of firewall rules, connection tracking, complex routing tables in the system will instead increase the system part. On most systems, the CPU time observed during network transfers can be cut in 4 parts : - the interrupt part, which concerns all the processing performed upon I/O receipt, before the target process is even known. Typically Rx packets are accounted for in interrupt. On some systems such as Linux where interrupt processing may be deferred to a dedicated thread, it can appear as softirq, and the thread is called ksoftirqd/0 (for CPU 0). The CPU taking care of this load is generally defined by the hardware settings, though in the case of softirq it is often possible to remap the processing to another CPU. This interrupt part will often be perceived as parasitic since it's not associated with any process, but it actually is some processing being done to prepare the work for the process. - the system part, which concerns all the processing done using kernel code called from userland. System calls are accounted as system for example. All synchronously delivered Tx packets will be accounted for as system time. If some packets have to be deferred due to queues filling up, they may then be processed in interrupt context later (eg: upon receipt of an ACK opening a TCP window). - the user part, which exclusively runs application code in userland. HAProxy runs exclusively in this part, though it makes heavy use of system calls. Rules processing, regular expressions, compression, encryption all add to the user portion of CPU consumption. - the idle part, which is what the CPU does when there is nothing to do. For example HAProxy waits for an incoming connection, or waits for some data to leave, meaning the system is waiting for an ACK from the client to push these data. In practice regarding HAProxy's activity, it is in general reasonably accurate (but totally inexact) to consider that interrupt/softirq are caused by Rx processing in kernel drivers, that user-land is caused by layer 7 processing in HAProxy, and that system time is caused by network processing on the Tx path. Since HAProxy runs around an event loop, it waits for new events using poll() (or any alternative) and processes all these events as fast as possible before going back to poll() waiting for new events. It measures the time spent waiting in poll() compared to the time spent processing events.
The ratio of polling time vs total time is called the "idle" time, it's the amount of time spent waiting for something to happen. This ratio is reported in the stats page on the "idle" line, or "Idle_pct" on the CLI. When it's close to 100%, it means the load is extremely low. When it's close to 0%, it means that there is constantly some activity. While it cannot be very accurate on an overloaded system due to other processes possibly preempting the CPU from the haproxy process, it still provides a good estimate of how HAProxy considers it is working : if the load is low and the idle ratio is low as well, it may indicate that HAProxy has a lot of work to do, possibly due to very expensive rules that have to be processed. Conversely, if HAProxy indicates the idle is close to 100% while things are slow, it means that it cannot do anything to speed things up because it is already waiting for incoming data to process. In the example below, haproxy is completely idle : $ echo "show info" | socat - /var/run/haproxy.sock | grep ^Idle Idle_pct: 100 When the idle ratio starts to become very low, it is important to tune the system and place processes and interrupts correctly to save the most possible CPU resources for all tasks. If a firewall is present, it may be worth trying to disable it or to tune it to ensure it is not responsible for a large part of the performance limitation. It's worth noting that unloading a stateful firewall generally reduces both the amount of interrupt/softirq and of system usage since such firewalls act both on the Rx and the Tx paths. On Linux, unloading the nf_conntrack and ip_conntrack modules will show whether there is anything to gain. If so, then the module runs with default settings and you'll have to figure out how to tune it for better performance. In general this consists in considerably increasing the hash table size. On FreeBSD, "pfctl -d" will disable the "pf" firewall and its stateful engine at the same time. If it is observed that a lot of time is spent in interrupt/softirq, it is important to ensure that they don't run on the same CPU. Most systems tend to pin the tasks on the CPU where they receive the network traffic because for certain workloads it improves things. But with heavily network-bound workloads it is the opposite as the haproxy process will have to fight against its kernel counterpart. Pinning haproxy to one CPU core and the interrupts to another one, all sharing the same L3 cache tends to noticeably increase network performance because in practice the amount of work for haproxy and the network stack are quite close, so they can almost fill an entire CPU each. On Linux this is done using taskset (for haproxy) or using cpu-map (from the haproxy config), and the interrupts are assigned under /proc/irq. Many network interfaces support multiple queues and multiple interrupts. In general it helps to spread them across a small number of CPU cores provided they all share the same L3 cache. Please always stop irqbalance, which always does the worst possible thing on such workloads. For CPU-bound workloads consisting in a lot of SSL traffic or a lot of compression, it may be worth using multiple processes dedicated to certain tasks, though there is no universal rule here and experimentation will have to be performed. In order to increase the CPU capacity, it is possible to make HAProxy run as several processes, using the "nbproc" directive in the global section.
There are some limitations though : - health checks are run per process, so the target servers will get as many checks as there are running processes ; - maxconn values and queues are per-process so the correct value must be set to avoid overloading the servers ; - outgoing connections should avoid using port ranges to avoid conflicts - stick-tables are per process and are not shared between processes ; - each peers section may only run on a single process at a time ; - the CLI operations will only act on a single process at a time. With this in mind, it appears that the easiest setup often consists in having one first layer running on multiple processes and in charge of the heavy processing, passing the traffic to a second layer running in a single process. This mechanism is suited to SSL and compression which are the two CPU-heavy features. Instances can easily be chained over UNIX sockets (which are cheaper than TCP sockets and which do not waste ports), and the proxy protocol which is useful to pass client information to the next stage. When doing so, it is generally a good idea to bind all the single-process tasks to process number 1 and extra tasks to next processes, as this will make it easier to generate similar configurations for different machines. On Linux versions 3.9 and above, running HAProxy in multi-process mode is much more efficient when each process uses a distinct listening socket on the same IP:port ; this will make the kernel evenly distribute the load across all processes instead of waking them all up. Please check the "process" option of the "bind" keyword lines in the configuration manual for more information. ``` 8. Logging ----------- ``` For logging, HAProxy always relies on a syslog server since it does not perform any file-system access. The standard way of using it is to send logs over UDP to the log server (by default on port 514). Very commonly this is configured to 127.0.0.1 where the local syslog daemon is running, but it's also used over the network to log to a central server. The central server provides additional benefits especially in active-active scenarios where it is desirable to keep the logs merged in arrival order. HAProxy may also make use of a UNIX socket to send its logs to the local syslog daemon, but it is not recommended at all, because if the syslog server is restarted while haproxy runs, the socket will be replaced and new logs will be lost. Since HAProxy will be isolated inside a chroot jail, it will not have the ability to reconnect to the new socket. It has also been observed in the field that the log buffers in use on UNIX sockets are very small and lead to lost messages even at very light loads. This can be fine for testing, however. It is recommended to add the following directive to the "global" section to make HAProxy log to the local daemon using facility "local0" : log 127.0.0.1:514 local0 and then to add the following one to each "defaults" section or to each frontend and backend section : log global This way, all logs will be centralized through the global definition of where the log server is.
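Putting the two directives together, a minimal logging setup might look like the sketch below; it simply repeats the recommendation above, and the "option httplog" line is a common but optional addition for richer HTTP traffic logs :

   global
       log 127.0.0.1:514 local0

   defaults
       log global
       option httplog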
Some syslog daemons do not listen to UDP traffic by default, so depending on the daemon being used, the syntax to enable this will vary : - on sysklogd, you need to pass argument "-r" on the daemon's command line so that it listens to a UDP socket for "remote" logs ; note that there is no way to limit it to address 127.0.0.1 so it will also receive logs from remote systems ; - on rsyslogd, the following lines must be added to the configuration file : $ModLoad imudp $UDPServerAddress * $UDPServerRun 514 - on syslog-ng, a new source can be created the following way, it then needs to be added as a valid source in one of the "log" directives : source s_udp { udp(ip(127.0.0.1) port(514)); }; Please consult your syslog daemon's manual for more information. If no logs are seen in the system's log files, please consider the following tests : - restart haproxy. Each frontend and backend logs one line indicating it's starting. If these logs are received, it means logs are working. - run "strace -tt -s100 -etrace=sendmsg -p <haproxy's pid>" and perform some activity that you expect to be logged. You should see the log messages being sent using sendmsg() there. If they don't appear, restart using strace on top of haproxy. If you still see no logs, it definitely means that something is wrong in your configuration. - run tcpdump to watch for port 514, for example on the loopback interface if the traffic is being sent locally : "tcpdump -As0 -ni lo port 514". If the packets are seen there, it proves they are being sent, and then the syslog daemon needs to be troubleshot. While traffic logs are sent from the frontends (where the incoming connections are accepted), backends also need to be able to send logs in order to report a server state change consecutive to a health check. Please consult HAProxy's configuration manual for more information regarding all possible log settings. It is convenient to choose a facility that is not used by other daemons. HAProxy examples often suggest "local0" for traffic logs and "local1" for admin logs because they're never seen in the field. A single facility would be enough as well. Having separate logs is convenient for log analysis, but it's also important to remember that logs may sometimes convey confidential information, and as such they must not be mixed with other logs that may accidentally be handed out to unauthorized people. For in-field troubleshooting without impacting the server's capacity too much, it is recommended to make use of the "halog" utility provided with HAProxy. This is sort of a grep-like utility designed to process HAProxy log files at a very fast data rate. Typical figures range between 1 and 2 GB of logs per second. It is capable of extracting only certain logs (eg: search for some classes of HTTP status codes, connection termination status, search by response time ranges, look for errors only), count lines, limit the output to a number of lines, and perform some more advanced statistics such as sorting servers by response time or error counts, sorting URLs by time or count, sorting client addresses by access count, and so on. It is pretty convenient to quickly spot anomalies such as a bot looping on the site, and block them. ``` 9. Statistics and monitoring ----------------------------- ``` It is possible to query HAProxy about its status. The most commonly used mechanism is the HTTP statistics page. This page also exposes an alternative CSV output format for monitoring tools. The same format is provided on the Unix socket.
Statistics are grouped into categories labelled as domains, corresponding to the multiple components of HAProxy. There are two domains available: proxy and dns. If not specified, the proxy domain is selected. Note that only the proxy statistics are printed on the HTTP page. ``` ### 9.1. CSV format ``` The statistics may be consulted either from the unix socket or from the HTTP page. Both means provide a CSV format whose fields follow; a retrieval example is shown after the field list below. The first line begins with a sharp ('#') and has one word per comma-delimited field which represents the title of the column. All other lines starting at the second one use a classical CSV format using a comma as the delimiter, and the double quote ('"') as an optional text delimiter, but only if the enclosed text is ambiguous (if it contains a quote or a comma). The double-quote character ('"') in the text is doubled ('""'), which is the format that most tools recognize. Please do not insert any column before these ones in order not to break tools which use hard-coded column positions. For proxy statistics, after each field name, the types which may have a value for that field are specified in brackets. The types are L (Listeners), F (Frontends), B (Backends), and S (Servers). There is a fixed set of static fields that are always available in the same order. A column containing the character '-' delimits the end of the static fields, after which presence or order of the fields are not guaranteed. Here is the list of static fields using the proxy statistics domain: 0. pxname [LFBS]: proxy name 1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend, any name for server/listener) 2. qcur [..BS]: current queued requests. For the backend this reports the number queued without a server assigned. 3. qmax [..BS]: max value of qcur 4. scur [LFBS]: current sessions 5. smax [LFBS]: max sessions 6. slim [LFBS]: configured session limit 7. stot [LFBS]: cumulative number of sessions 8. bin [LFBS]: bytes in 9. bout [LFBS]: bytes out 10. dreq [LFB.]: requests denied because of security concerns. - For tcp this is because of a matched tcp-request content rule. - For http this is because of a matched http-request or tarpit rule. 11. dresp [LFBS]: responses denied because of security concerns. - For http this is because of a matched http-request rule, or "option checkcache". 12. ereq [LF..]: request errors. Some of the possible causes are: - early termination from the client, before the request has been sent. - read error from the client - client timeout - client closed connection - various bad requests from the client. - request was tarpitted. 13. econ [..BS]: number of requests that encountered an error trying to connect to a backend server. The backend stat is the sum of the stat for all servers of that backend, plus any connection errors not associated with a particular server (such as the backend having no active servers). 14. eresp [..BS]: response errors. srv_abrt will be counted here also. Some other errors are: - write error on the client socket (won't be counted for the server stat) - failure applying filters to the response. 15. wretr [..BS]: number of times a connection to a server was retried. 16. wredis [..BS]: number of times a request was redispatched to another server. The server value counts the number of times that server was switched away from. 17. status [LFBS]: status (UP/DOWN/NOLB/MAINT/MAINT(via)/MAINT(resolution)...) 18. weight [..BS]: total effective weight (backend), effective weight (server)
19. act [..BS]: number of active servers (backend), server is active (server) 20. bck [..BS]: number of backup servers (backend), server is backup (server) 21. chkfail [...S]: number of failed checks. (Only counts checks failed when the server is up.) 22. chkdown [..BS]: number of UP->DOWN transitions. The backend counter counts transitions to the whole backend being down, rather than the sum of the counters for each server. 23. lastchg [..BS]: number of seconds since the last UP<->DOWN transition 24. downtime [..BS]: total downtime (in seconds). The value for the backend is the downtime for the whole backend, not the sum of the server downtime. 25. qlimit [...S]: configured maxqueue for the server, or nothing if the value is 0 (default, meaning no limit) 26. pid [LFBS]: process id (0 for first instance, 1 for second, ...) 27. iid [LFBS]: unique proxy id 28. sid [L..S]: server id (unique inside a proxy) 29. throttle [...S]: current throttle percentage for the server, when slowstart is active, or no value if not in slowstart. 30. lbtot [..BS]: total number of times a server was selected, either for new sessions, or when re-dispatching. The server counter is the number of times that server was selected. 31. tracked [...S]: id of proxy/server if tracking is enabled. 32. type [LFBS]: (0=frontend, 1=backend, 2=server, 3=socket/listener) 33. rate [.FBS]: number of sessions per second over last elapsed second 34. rate_lim [.F..]: configured limit on new sessions per second 35. rate_max [.FBS]: max number of new sessions per second 36. check_status [...S]: status of last health check, one of: UNK -> unknown INI -> initializing SOCKERR -> socket error L4OK -> check passed on layer 4, no upper layers testing enabled L4TOUT -> layer 1-4 timeout L4CON -> layer 1-4 connection problem, for example "Connection refused" (tcp rst) or "No route to host" (icmp) L6OK -> check passed on layer 6 L6TOUT -> layer 6 (SSL) timeout L6RSP -> layer 6 invalid response - protocol error L7OK -> check passed on layer 7 L7OKC -> check conditionally passed on layer 7, for example 404 with disable-on-404 L7TOUT -> layer 7 (HTTP/SMTP) timeout L7RSP -> layer 7 invalid response - protocol error L7STS -> layer 7 response error, for example HTTP 5xx Notice: If a check is currently running, the last known status will be reported, prefixed with "* ". e.g. "* L7OK". 37. check_code [...S]: layer5-7 code, if available 38. check_duration [...S]: time in ms taken to finish last health check 39. hrsp_1xx [.FBS]: http responses with 1xx code 40. hrsp_2xx [.FBS]: http responses with 2xx code 41. hrsp_3xx [.FBS]: http responses with 3xx code 42. hrsp_4xx [.FBS]: http responses with 4xx code 43. hrsp_5xx [.FBS]: http responses with 5xx code 44. hrsp_other [.FBS]: http responses with other codes (protocol error) 45. hanafail [...S]: failed health checks details 46. req_rate [.F..]: HTTP requests per second over last elapsed second 47. req_rate_max [.F..]: max number of HTTP requests per second observed 48. req_tot [.FB.]: total number of HTTP requests received 49. cli_abrt [..BS]: number of data transfers aborted by the client 50. srv_abrt [..BS]: number of data transfers aborted by the server (inc. in eresp) 51. comp_in [.FB.]: number of HTTP response bytes fed to the compressor 52. comp_out [.FB.]: number of HTTP response bytes emitted by the compressor 53. comp_byp [.FB.]: number of bytes that bypassed the HTTP compressor (CPU/BW limit) 54. comp_rsp [.FB.]: number of HTTP responses that were compressed 55.
lastsess [..BS]: number of seconds since last session assigned to server/backend 56. last_chk [...S]: last health check contents or textual error 57. last_agt [...S]: last agent check contents or textual error 58. qtime [..BS]: the average queue time in ms over the 1024 last requests 59. ctime [..BS]: the average connect time in ms over the 1024 last requests 60. rtime [..BS]: the average response time in ms over the 1024 last requests (0 for TCP) 61. ttime [..BS]: the average total session time in ms over the 1024 last requests 62. agent_status [...S]: status of last agent check, one of: UNK -> unknown INI -> initializing SOCKERR -> socket error L4OK -> check passed on layer 4, no upper layers testing enabled L4TOUT -> layer 1-4 timeout L4CON -> layer 1-4 connection problem, for example "Connection refused" (tcp rst) or "No route to host" (icmp) L7OK -> agent reported "up" L7STS -> agent reported "fail", "stop", or "down" 63. agent_code [...S]: numeric code reported by agent if any (unused for now) 64. agent_duration [...S]: time in ms taken to finish last check 65. check_desc [...S]: short human-readable description of check_status 66. agent_desc [...S]: short human-readable description of agent_status 67. check_rise [...S]: server's "rise" parameter used by checks 68. check_fall [...S]: server's "fall" parameter used by checks 69. check_health [...S]: server's health check value between 0 and rise+fall-1 70. agent_rise [...S]: agent's "rise" parameter, normally 1 71. agent_fall [...S]: agent's "fall" parameter, normally 1 72. agent_health [...S]: agent's health parameter, between 0 and rise+fall-1 73. addr [L..S]: address:port or "unix". IPv6 has brackets around the address. 74: cookie [..BS]: server's cookie value or backend's cookie name 75: mode [LFBS]: proxy mode (tcp, http, health, unknown) 76: algo [..B.]: load balancing algorithm 77: conn_rate [.F..]: number of connections over the last elapsed second 78: conn_rate_max [.F..]: highest known conn_rate 79: conn_tot [.F..]: cumulative number of connections 80: intercepted [.FB.]: cum. number of intercepted requests (monitor, stats) 81: dcon [LF..]: requests denied by "tcp-request connection" rules 82: dses [LF..]: requests denied by "tcp-request session" rules 83: wrew [LFBS]: cumulative number of failed header rewriting warnings 84: connect [..BS]: cumulative number of connection establishment attempts 85: reuse [..BS]: cumulative number of connection reuses 86: cache_lookups [.FB.]: cumulative number of cache lookups 87: cache_hits [.FB.]: cumulative number of cache hits 88: srv_icur [...S]: current number of idle connections available for reuse 89: src_ilim [...S]: limit on the number of available idle connections 90. qtime_max [..BS]: the maximum observed queue time in ms 91. ctime_max [..BS]: the maximum observed connect time in ms 92. rtime_max [..BS]: the maximum observed response time in ms (0 for TCP) 93. ttime_max [..BS]: the maximum observed total session time in ms 94. eint [LFBS]: cumulative number of internal errors 95. idle_conn_cur [...S]: current number of unsafe idle connections 96. safe_conn_cur [...S]: current number of safe idle connections 97. used_conn_cur [...S]: current number of connections in use 98. need_conn_est [...S]: estimated needed number of connections 99. uweight [..BS]: total user weight (backend), server user weight (server) For all other statistics domains, the presence or the order of the fields are not guaranteed. In this case, the header line should always be used to parse the CSV data. 
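As referenced above, the CSV statistics are typically retrieved either over the stats socket or by appending ";csv" to the stats page URI. The sketch below is only an illustration: the socket path and stats address must match your own configuration, and the cut fields (1=pxname, 2=svname, 18=status, counting from 1) refer to the static columns listed above :

   # over the stats socket
   echo "show stat" | socat /var/run/haproxy.sock stdio | cut -d, -f1,2,18

   # over the HTTP statistics page (assumes a stats listener on port 8404)
   curl -s "http://127.0.0.1:8404/stats;csv" | head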
``` ### 9.2. Typed output format ``` Both "[show info](#show%20info)" and "[show stat](#show%20stat)" support a mode where each output value comes with its type and sufficient information to know how the value is supposed to be aggregated between processes and how it evolves. In all cases, the output consists in having a single value per line with all the information split into fields delimited by colons (':'). The first column designates the object or metric being dumped. Its format is specific to the command producing this output and will not be described in this section. Usually it will consist in a series of identifiers and field names. The second column contains 3 characters respectively indicating the origin, the nature and the scope of the value being reported. The first character (the origin) indicates where the value was extracted from. Possible characters are : M The value is a metric. It is valid at one instant and may change depending on its nature. S The value is a status. It represents a discrete value which by definition cannot be aggregated. It may be the status of a server ("UP" or "DOWN"), the PID of the process, etc. K The value is a sorting key. It represents an identifier which may be used to group some values together because it is unique among its class. All internal identifiers are keys. Some names can be listed as keys if they are unique (eg: a frontend name is unique). In general keys come from the configuration, even though some of them may automatically be assigned. For most purposes keys may be considered as equivalent to configuration. C The value comes from the configuration. Certain configuration values make sense on the output, for example a concurrent connection limit or a cookie name. By definition these values are the same in all processes started from the same configuration file. P The value comes from the product itself. There are very few such values, the most common use is to report the product name, version and release date. These elements are also the same between all processes. The second character (the nature) indicates the nature of the information carried by the field in order to let an aggregator decide on what operation to use to aggregate multiple values. Possible characters are : A The value represents an age since a last event. This is a bit different from the duration in that an age is automatically computed based on the current date. A typical example is how long ago did the last session happen on a server. Ages are generally aggregated by taking the minimum value and do not need to be stored. a The value represents an already averaged value. The average response times and server weights are of this nature. Averages can typically be averaged between processes. C The value represents a cumulative counter. Such measures perpetually increase until they wrap around. Some monitoring protocols need to tell the difference between a counter and a gauge to report a different type. In general counters may simply be summed since they represent events or volumes. Examples of metrics of this nature are connection counts or byte counts. D The value represents a duration for a status. There are a few usages of this, most of them include the time taken by the last health check and the time a server has spent down. Durations are generally not summed, most of the time the maximum will be retained to compute an SLA. G The value represents a gauge. It's a measure at one instant. The memory usage or the current number of active connections are of this nature.
Metrics of this type are typically summed during aggregation. L The value represents a limit (generally a configured one). By nature, limits are harder to aggregate since they are specific to the point where they were retrieved. In certain situations they may be summed or be kept separate. M The value represents a maximum. In general it will apply to a gauge and keep the highest known value. An example of such a metric could be the maximum amount of concurrent connections that was encountered in the product's life time. To correctly aggregate maxima, you are supposed to output a range going from the maximum of all maxima and the sum of all of them. There is indeed no way to know if they were encountered simultaneously or not. m The value represents a minimum. In general it will apply to a gauge and keep the lowest known value. An example of such a metric could be the minimum amount of free memory pools that was encountered in the product's life time. To correctly aggregate minima, you are supposed to output a range going from the minimum of all minima and the sum of all of them. There is indeed no way to know if they were encountered simultaneously or not. N The value represents a name, so it is a string. It is used to report proxy names, server names and cookie names. Names have configuration or keys as their origin and are supposed to be the same among all processes. O The value represents a free text output. Outputs from various commands, returns from health checks, node descriptions are of such nature. R The value represents an event rate. It's a measure at one instant. It is quite similar to a gauge except that the recipient knows that this measure moves slowly and may decide not to keep all values. An example of such a metric is the measured amount of connections per second. Metrics of this type are typically summed during aggregation. T The value represents a date or time. A field emitting the current date would be of this type. The method to aggregate such information is left as an implementation choice. For now no field uses this type. The third character (the scope) indicates what extent the value reflects. Some elements may be per process while others may be per configuration or per system. The distinction is important to know whether or not a single value should be kept during aggregation or if values have to be aggregated. The following characters are currently supported : C The value is valid for a whole cluster of nodes, which is the set of nodes communicating over the peers protocol. An example could be the amount of entries present in a stick table that is replicated with other peers. At the moment no metric use this scope. P The value is valid only for the process reporting it. Most metrics use this scope. S The value is valid for the whole service, which is the set of processes started together from the same configuration file. All metrics originating from the configuration use this scope. Some other metrics may use it as well for some shared resources (eg: shared SSL cache statistics). s The value is valid for the whole system, such as the system's hostname, current date or resource usage. At the moment this scope is not used by any metric. Consumers of these information will generally have enough of these 3 characters to determine how to accurately report aggregated information across multiple processes. 
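To make this classification concrete, consider a purely hypothetical line of typed output whose second column is "CLP" (the object name, type and value below are invented for illustration and not taken from a real dump) :

  some_frontend.Maxconn.1:CLP:u32:4000

Following the definitions above, the value comes from the configuration (C), it is a limit (L) and it is only valid for the reporting process (P), so an aggregator should keep it per process rather than sum it. The remaining columns (the type and the value itself) are described below.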
After this column, the third column indicates the type of the field, among "s32" (signed 32-bit integer), "s64" (signed 64-bit integer), "u32" (unsigned 32-bit integer), "u64" (unsigned 64-bit integer), "str" (string). It is important to know the type before parsing the value in order to properly read it. For example a string containing only digits is still a string an not an integer (eg: an error code extracted by a check). Then the fourth column is the value itself, encoded according to its type. Strings are dumped as-is immediately after the colon without any leading space. If a string contains a colon, it will appear normally. This means that the output should not be exclusively split around colons or some check outputs or server addresses might be truncated. ``` ### 9.3. Unix Socket commands ``` The stats socket is not enabled by default. In order to enable it, it is necessary to add one line in the global section of the haproxy configuration. A second line is recommended to set a larger timeout, always appreciated when issuing commands by hand : global stats socket /var/run/haproxy.sock mode 600 level admin stats timeout 2m It is also possible to add multiple instances of the stats socket by repeating the line, and make them listen to a TCP port instead of a UNIX socket. This is never done by default because this is dangerous, but can be handy in some situations : global stats socket /var/run/haproxy.sock mode 600 level admin stats socket [email protected]:9999 level admin stats timeout 2m To access the socket, an external utility such as "socat" is required. Socat is a swiss-army knife to connect anything to anything. We use it to connect terminals to the socket, or a couple of stdin/stdout pipes to it for scripts. The two main syntaxes we'll use are the following : # socat /var/run/haproxy.sock stdio # socat /var/run/haproxy.sock readline The first one is used with scripts. It is possible to send the output of a script to haproxy, and pass haproxy's output to another script. That's useful for retrieving counters or attack traces for example. The second one is only useful for issuing commands by hand. It has the benefit that the terminal is handled by the readline library which supports line editing and history, which is very convenient when issuing repeated commands (eg: watch a counter). The socket supports two operation modes : - interactive - non-interactive The non-interactive mode is the default when socat connects to the socket. In this mode, a single line may be sent. It is processed as a whole, responses are sent back, and the connection closes after the end of the response. This is the mode that scripts and monitoring tools use. It is possible to send multiple commands in this mode, they need to be delimited by a semi-colon (';'). For example : # echo "show info;show stat;show table" | socat /var/run/haproxy stdio If a command needs to use a semi-colon or a backslash (eg: in a value), it must be preceded by a backslash ('\'). The interactive mode displays a prompt ('>') and waits for commands to be entered on the line, then processes them, and displays the prompt again to wait for a new command. This mode is entered via the "prompt" command which must be sent on the first line in non-interactive mode. The mode is a flip switch, if "prompt" is sent in interactive mode, it is disabled and the connection closes after processing the last command of the same line. 
For this reason, when debugging by hand, it's quite common to start with the "prompt" command : # socat /var/run/haproxy readline prompt > show info ... > Since multiple commands may be issued at once, haproxy uses the empty line as a delimiter to mark an end of output for each command, and takes care of ensuring that no command can emit an empty line on output. A script can thus easily parse the output even when multiple commands were pipelined on a single line. Some commands may take an optional payload. To add one to a command, the first line needs to end with the "<<\n" pattern. The next lines will be treated as the payload and can contain as many lines as needed. To validate a command with a payload, it needs to end with an empty line. Limitations do exist: the length of the whole buffer passed to the CLI must not be greater than tune.bufsize and the pattern "<<" must not be glued to the last word of the line. When entering a payload while in interactive mode, the prompt will change from "> " to "+ ". It is important to understand that when multiple haproxy processes are started on the same sockets, any process may pick up the request and will output its own stats. The list of commands currently supported on the stats socket is provided below. If an unknown command is sent, haproxy displays the usage message which lists all supported commands. Some commands support a more complex syntax; in that case the error message generally explains which part of the command is invalid. Some commands require a higher level of privilege to work. If you do not have enough privilege, you will get an error "Permission denied". Please check the "level" option of the "bind" keyword lines in the configuration manual for more information. ``` **abort ssl ca-file** <cafile> ``` Abort and destroy a temporary CA file update transaction. See also "[set ssl ca-file](#set%20ssl%20ca-file)" and "[commit ssl ca-file](#commit%20ssl%20ca-file)". ``` **abort ssl cert** <filename> ``` Abort and destroy a temporary SSL certificate update transaction. See also "[set ssl cert](#set%20ssl%20cert)" and "[commit ssl cert](#commit%20ssl%20cert)". ``` **abort ssl crl-file** <crlfile> ``` Abort and destroy a temporary CRL file update transaction. See also "[set ssl crl-file](#set%20ssl%20crl-file)" and "[commit ssl crl-file](#commit%20ssl%20crl-file)". ``` **add acl** [@<ver>] <acl> <pattern> ``` Add an entry into the acl <acl>. <acl> is the #<id> or the <file> returned by "[show acl](#show%20acl)". This command does not verify if the entry already exists. Entries are added to the current version of the ACL, unless a specific version is specified with "@<ver>". This version number must have previously been allocated by "[prepare acl](#prepare%20acl)", and it will lie between the versions reported in "curr_ver" and "next_ver" on the output of "[show acl](#show%20acl)". Entries added with a specific version number will not match until a "commit acl" operation is performed on them. They may however be consulted using the "show acl @<ver>" command, and cleared using a "clear acl @<ver>" command. This command cannot be used if the reference <acl> is a file also used with a map. In this case, the "[add map](#add%20map)" command must be used instead. ``` **add map** [@<ver>] <map> <key> <value> **add map** [@<ver>] <map> <payload> ``` Add an entry into the map <map> to associate the value <value> to the key <key>. This command does not verify if the entry already exists.
It is mainly used to fill a map after a "[clear](#clear)" or "[prepare](#prepare)" operation. Entries are added to the current version of the map, unless a specific version is specified with "@<ver>". This version number must have previously been allocated by "[prepare map](#prepare%20map)", and it will lie between the versions reported in "curr_ver" and "next_ver" on the output of "[show map](#show%20map)". Entries added with a specific version number will not match until a "commit map" operation is performed on them. They may however be consulted using the "show map @<ver>" command, and cleared using a "clear map @<ver>" command. If the designated map is also used as an ACL, the ACL will only match the <key> part and will ignore the <value> part. Using the payload syntax it is possible to add multiple key/value pairs by entering them on separate lines. On each new line, the first word is the key and the rest of the line is considered to be the value, which can even contain spaces. ``` Example: ``` # socat /tmp/sock1 - prompt > add map #-1 << + key1 value1 + key2 value2 with spaces + key3 value3 also with spaces + key4 value4 > ``` **add server** <backend>/<server> [args]\* ``` Instantiate a new server attached to the backend <backend>. The <server> name must not already be used in the backend. A special restriction is put on the backend, which must use a dynamic load-balancing algorithm. A subset of keywords from the server configuration file statement can be used to configure the server behavior. Also note that no settings will be reused from a hypothetical 'default-server' statement in the same backend. Currently a dynamic server is statically initialized with the "none" init-addr method. This means that no resolution will be undertaken if an FQDN is specified as an address, even though the server creation will be validated. To support reload operations, it is expected that the server created via the CLI is also manually inserted in the relevant haproxy configuration file. A dynamic server not present in the configuration won't be restored after a reload operation. A dynamic server may use the "track" keyword to follow the check status of another server from the configuration. However, it is not possible to track another dynamic server. This is to ensure that the tracking chain is kept consistent even in the case of dynamic server deletion. Use the "check" keyword to enable health-check support. Note that the health-check is disabled by default and must be enabled separately for the server using the "[enable health](#enable%20health)" command. For agent checks, use the "agent-check" keyword and the "[enable agent](#enable%20agent)" command. Note that in this case the server may be activated via the agent depending on the status reported, without an explicit "[enable server](#enable%20server)" command. This also means that extra care is required when removing a dynamic server with an agent check. The agent should first be deactivated via "[disable agent](#disable%20agent)" to be able to put the server in the required maintenance mode before removal. It may be possible to reach the fd limit when using a large number of dynamic servers. Please refer to the "ulimit-n" global keyword documentation in this case.
Here is the list of the currently supported keywords : - agent-addr - agent-check - agent-inter - agent-port - agent-send - allow-0rtt - alpn - addr - backup - ca-file - check - check-alpn - check-proto - check-send-proxy - check-sni - check-ssl - check-via-socks4 - ciphers - ciphersuites - crl-file - crt - disabled - downinter - enabled - error-limit - fall - fastinter - force-sslv3/tlsv10/tlsv11/tlsv12/tlsv13 - id - inter - maxconn - maxqueue - minconn - no-ssl-reuse - no-sslv3/tlsv10/tlsv11/tlsv12/tlsv13 - no-tls-tickets - npn - observe - on-error - on-marked-down - on-marked-up - pool-low-conn - pool-max-conn - pool-purge-delay - port - proto - proxy-v2-options - rise - send-proxy - send-proxy-v2 - send-proxy-v2-ssl - send-proxy-v2-ssl-cn - slowstart - sni - source - ssl - ssl-max-ver - ssl-min-ver - tfo - tls-tickets - track - usesrc - verify - verifyhost - weight - ws Their syntax is similar to the server line from the configuration file, please refer to their individual documentation for details. ``` **add ssl ca-file** <cafile> <payload> ``` Add a new certificate to a ca-file. This command is useful when you have reached the buffer size limit on the CLI and want to add multiple certificates. Instead of doing a "[set](#set)" with all the certificates, you can add each certificate individually. A "[set ssl ca-file](#set%20ssl%20ca-file)" will reset the ca-file. ``` Example: ``` echo -e "set ssl ca-file cafile.pem <<\n$(cat rootCA.crt)\n" | \ socat /var/run/haproxy.stat - echo -e "add ssl ca-file cafile.pem <<\n$(cat intermediate1.crt)\n" | \ socat /var/run/haproxy.stat - echo -e "add ssl ca-file cafile.pem <<\n$(cat intermediate2.crt)\n" | \ socat /var/run/haproxy.stat - echo "commit ssl ca-file cafile.pem" | socat /var/run/haproxy.stat - ``` **add ssl crt-list** <crtlist> <certificate> **add ssl crt-list** <crtlist> <payload> ``` Add a certificate to a crt-list. It can also be used for directories since directories are now loaded the same way as the crt-lists. This command allows you to pass a certificate name as a parameter; to use SSL options or filters, a crt-list line must be sent as a payload instead. Only one crt-list line is supported in the payload. This command will load the certificate for every bind line using the crt-list. To push a new certificate to HAProxy the commands "[new ssl cert](#new%20ssl%20cert)" and "[set ssl cert](#set%20ssl%20cert)" must be used. ``` Example: ``` $ echo "new ssl cert foobar.pem" | socat /tmp/sock1 - $ echo -e "set ssl cert foobar.pem <<\n$(cat foobar.pem)\n" | socat /tmp/sock1 - $ echo "commit ssl cert foobar.pem" | socat /tmp/sock1 - $ echo "add ssl crt-list certlist1 foobar.pem" | socat /tmp/sock1 - $ echo -e 'add ssl crt-list certlist1 <<\nfoobar.pem [allow-0rtt] foo.bar.com !test1.com\n' | socat /tmp/sock1 - ``` **clear counters** ``` Clear the max values of the statistics counters in each proxy (frontend & backend) and in each server. The accumulated counters are not affected. The internal activity counters reported by "[show activity](#show%20activity)" are also reset. This can be used to get clean counters after an incident, without having to restart nor to clear traffic counters. This command is restricted and can only be issued on sockets configured for levels "[operator](#operator)" or "admin". ``` **clear counters all** ``` Clear all statistics counters in each proxy (frontend & backend) and in each server. This has the same effect as restarting. This command is restricted and can only be issued on sockets configured for level "admin".
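For example, to take a clean snapshot after an incident (a minimal sketch, assuming the stats socket is reachable at /var/run/haproxy.sock with a sufficient privilege level) :

  $ echo "clear counters" | socat stdio /var/run/haproxy.sock
  $ echo "show stat" | socat stdio /var/run/haproxy.sock > stats-after-incident.csv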
``` **clear acl** [@<ver>] <acl> ``` Remove all entries from the acl <acl>. <acl> is the #<id> or the <file> returned by "[show acl](#show%20acl)". Note that if the reference <acl> is a file and is shared with a map, this map will be also cleared. By default only the current version of the ACL is cleared (the one being matched against). However it is possible to specify another version using '@' followed by this version. ``` **clear map** [@<ver>] <map> ``` Remove all entries from the map <map>. <map> is the #<id> or the <file> returned by "[show map](#show%20map)". Note that if the reference <map> is a file and is shared with a acl, this acl will be also cleared. By default only the current version of the map is cleared (the one being matched against). However it is possible to specify another version using '@' followed by this version. ``` **clear table** <table> [ data.<type> <operator> <value> ] | [ key <key> ] ``` Remove entries from the stick-table <table>. This is typically used to unblock some users complaining they have been abusively denied access to a service, but this can also be used to clear some stickiness entries matching a server that is going to be replaced (see "show table" below for details). Note that sometimes, removal of an entry will be refused because it is currently tracked by a session. Retrying a few seconds later after the session ends is usual enough. In the case where no options arguments are given all entries will be removed. When the "data." form is used entries matching a filter applied using the stored data (see "stick-table" in [section 4.2](#4.2)) are removed. A stored data type must be specified in <type>, and this data type must be stored in the table otherwise an error is reported. The data is compared according to <operator> with the 64-bit integer <value>. Operators are the same as with the ACLs : - eq : match entries whose data is equal to this value - ne : match entries whose data is not equal to this value - le : match entries whose data is less than or equal to this value - ge : match entries whose data is greater than or equal to this value - lt : match entries whose data is less than this value - gt : match entries whose data is greater than this value When the key form is used the entry <key> is removed. The key must be of the same type as the table, which currently is limited to IPv4, IPv6, integer and string. ``` Example : ``` $ echo "show table http_proxy" | socat stdio /tmp/sock1 >>> # table: http\_proxy, type: ip, size:204800, used:2 >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1 \ bytes_out_rate(60000)=187 >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \ bytes_out_rate(60000)=191 $ echo "clear table http_proxy key 127.0.0.1" | socat stdio /tmp/sock1 $ echo "show table http_proxy" | socat stdio /tmp/sock1 >>> # table: http\_proxy, type: ip, size:204800, used:1 >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \ bytes_out_rate(60000)=191 $ echo "clear table http_proxy data.gpc0 eq 1" | socat stdio /tmp/sock1 $ echo "show table http_proxy" | socat stdio /tmp/sock1 >>> # table: http\_proxy, type: ip, size:204800, used:1 ``` ``` commit acl @<ver> <acl> Commit all changes made to version <ver> of ACL <acl>, and deletes all past versions. <acl> is the #<id> or the <file> returned by "[show acl](#show%20acl)". The version number must be between "curr_ver"+1 and "next_ver" as reported in "[show acl](#show%20acl)". 
The contents to be committed to the ACL can be consulted with "show acl @<ver> <acl>" if desired. The specified version number has normally been created with the "[prepare acl](#prepare%20acl)" command. The replacement is atomic. It consists in atomically updating the current version to the specified version, which will instantly cause all entries in other versions to become invisible, and all entries in the new version to become visible. It is also possible to use this command to perform an atomic removal of all visible entries of an ACL by calling "[prepare acl](#prepare%20acl)" first then committing without adding any entries. This command cannot be used if the reference <acl> is a file also used as a map. In this case, the "commit map" command must be used instead. commit map @<ver> <map> Commit all changes made to version <ver> of map <map>, and deletes all past versions. <map> is the #<id> or the <file> returned by "[show map](#show%20map)". The version number must be between "curr_ver"+1 and "next_ver" as reported in "[show map](#show%20map)". The contents to be committed to the map can be consulted with "show map @<ver> <map>" if desired. The specified version number has normally been created with the "[prepare map](#prepare%20map)" command. The replacement is atomic. It consists in atomically updating the current version to the specified version, which will instantly cause all entries in other versions to become invisible, and all entries in the new version to become visible. It is also possible to use this command to perform an atomic removal of all visible entries of an map by calling "[prepare map](#prepare%20map)" first then committing without adding any entries. ``` **commit ssl ca-file** <cafile> ``` Commit a temporary SSL CA file update transaction. In the case of an existing CA file (in a "Used" state in "[show ssl ca-file](#show%20ssl%20ca-file)"), the new CA file tree entry is inserted in the CA file tree and every instance that used the CA file entry is rebuilt, along with the SSL contexts it needs. All the contexts previously used by the rebuilt instances are removed. Upon success, the previous CA file entry is removed from the tree. Upon failure, nothing is removed or deleted, and all the original SSL contexts are kept and used. Once the temporary transaction is committed, it is destroyed. In the case of a new CA file (after a "[new ssl ca-file](#new%20ssl%20ca-file)" and in a "Unused" state in "[show ssl ca-file](#show%20ssl%20ca-file)"), the CA file will be inserted in the CA file tree but it won't be used anywhere in HAProxy. To use it and generate SSL contexts that use it, you will need to add it to a crt-list with "add ssl crt-list". See also "[new ssl ca-file](#new%20ssl%20ca-file)", "[set ssl ca-file](#set%20ssl%20ca-file)", "[add ssl ca-file](#add%20ssl%20ca-file)", "[abort ssl ca-file](#abort%20ssl%20ca-file)" and "[add ssl crt-list](#add%20ssl%20crt-list)". ``` **commit ssl cert** <filename> ``` Commit a temporary SSL certificate update transaction. In the case of an existing certificate (in a "Used" state in "show ssl cert"), generate every SSL contextes and SNIs it need, insert them, and remove the previous ones. Replace in memory the previous SSL certificates everywhere the <filename> was used in the configuration. Upon failure it doesn't remove or insert anything. Once the temporary transaction is committed, it is destroyed. 
In the case of a new certificate (after a "[new ssl cert](#new%20ssl%20cert)" and in a "Unused" state in "[show ssl cert](#show%20ssl%20cert)"), the certificate will be committed in a certificate storage, but it won't be used anywhere in haproxy. To use it and generate its SNIs you will need to add it to a crt-list or a directory with "add ssl crt-list". See also "[new ssl cert](#new%20ssl%20cert)", "[set ssl cert](#set%20ssl%20cert)", "[abort ssl cert](#abort%20ssl%20cert)" and "[add ssl crt-list](#add%20ssl%20crt-list)". ``` **commit ssl crl-file** <crlfile> ``` Commit a temporary SSL CRL file update transaction. In the case of an existing CRL file (in a "Used" state in "show ssl crl-file"), the new CRL file entry is inserted in the CA file tree (which holds both the CA files and the CRL files) and every instance that used the CRL file entry is rebuilt, along with the SSL contexts it needs. All the contexts previously used by the rebuilt instances are removed. Upon success, the previous CRL file entry is removed from the tree. Upon failure, nothing is removed or deleted, and all the original SSL contexts are kept and used. Once the temporary transaction is committed, it is destroyed. In the case of a new CRL file (after a "[new ssl crl-file](#new%20ssl%20crl-file)" and in a "Unused" state in "[show ssl crl-file](#show%20ssl%20crl-file)"), the CRL file will be inserted in the CRL file tree but it won't be used anywhere in HAProxy. To use it and generate SSL contexts that use it, you will need to add it to a crt-list with "add ssl crt-list". See also "[new ssl crl-file](#new%20ssl%20crl-file)", "[set ssl crl-file](#set%20ssl%20crl-file)", "[abort ssl crl-file](#abort%20ssl%20crl-file)" and "[add ssl crt-list](#add%20ssl%20crt-list)". ``` **debug dev** <command> [args]\* ``` Call a developer-specific command. Only supported on a CLI connection running in expert mode (see "expert-mode on"). Such commands are extremely dangerous and not forgiving, any misuse may result in a crash of the process. They are intended for experts only, and must really not be used unless told to do so. Some of them are only available when haproxy is built with DEBUG_DEV defined because they may have security implications. All of these commands require admin privileges, and are purposely not documented to avoid encouraging their use by people who are not at ease with the source code. ``` **del acl** <acl> [<key>|#<ref>] ``` Delete all the acl entries from the acl <acl> corresponding to the key <key>. <acl> is the #<id> or the <file> returned by "[show acl](#show%20acl)". If the <ref> is used, this command delete only the listed reference. The reference can be found with listing the content of the acl. Note that if the reference <acl> is a file and is shared with a map, the entry will be also deleted in the map. ``` **del map** <map> [<key>|#<ref>] ``` Delete all the map entries from the map <map> corresponding to the key <key>. <map> is the #<id> or the <file> returned by "[show map](#show%20map)". If the <ref> is used, this command delete only the listed reference. The reference can be found with listing the content of the map. Note that if the reference <map> is a file and is shared with a acl, the entry will be also deleted in the map. ``` **del ssl ca-file** <cafile> ``` Delete a CA file tree entry from HAProxy. The CA file must be unused and removed from any crt-list. "[show ssl ca-file](#show%20ssl%20ca-file)" displays the status of the CA files. 
The deletion doesn't work with a certificate referenced directly with the "ca-file" or "ca-verify-file" directives in the configuration. ``` **del ssl cert** <certfile> ``` Delete a certificate store from HAProxy. The certificate must be unused and removed from any crt-list or directory. "[show ssl cert](#show%20ssl%20cert)" displays the status of the certificate. The deletion doesn't work with a certificate referenced directly with the "crt" directive in the configuration. ``` **del ssl crl-file** <crlfile> ``` Delete a CRL file tree entry from HAProxy. The CRL file must be unused and removed from any crt-list. "[show ssl crl-file](#show%20ssl%20crl-file)" displays the status of the CRL files. The deletion doesn't work with a certificate referenced directly with the "crl-file" directive in the configuration. ``` **del ssl crt-list** <filename> <certfile[:line]> ``` Delete an entry in a crt-list. This will delete every SNI used for this entry in the frontends. If a certificate is used several times in a crt-list, you will need to specify which line you want to delete. To display the line numbers, use "show ssl crt-list -n <crtlist>". ``` **del server** <backend>/<server> ``` Remove a server attached to the backend <backend>. All servers are eligible, except servers which are referenced by other configuration elements. The server must be put in maintenance mode prior to its deletion. The operation is cancelled if the server still has active or idle connections or if its connection queue is not empty. ``` **disable agent** <backend>/<server> ``` Mark the auxiliary agent check as temporarily stopped. In the case where an agent check is being run as an auxiliary check, due to the agent-check parameter of a server directive, new checks are only initialized when the agent is in the enabled state. Thus, disable agent will prevent any new agent checks from being initiated until the agent is re-enabled using enable agent. When an agent is disabled, the processing of an auxiliary agent check that was initiated while the agent was set as enabled is as follows: all results that would alter the weight, specifically "drain" or a weight returned by the agent, are ignored. The processing of the agent check is otherwise unchanged. The motivation for this feature is to allow the weight-changing effects of the agent checks to be paused, so that the weight of a server can be configured using set weight without being overridden by the agent. This command is restricted and can only be issued on sockets configured for level "admin". ``` **disable dynamic-cookie backend** <backend> ``` Disable the generation of dynamic cookies for the backend <backend>. ``` **disable frontend** <frontend> ``` Mark the frontend as temporarily stopped. This corresponds to the mode which is used during a soft restart : the frontend releases the port but can be enabled again if needed. This should be used with care as some non-Linux OSes are unable to enable it back. This is intended to be used in environments where stopping a proxy is not even imaginable but a misconfigured proxy must be fixed. That way it's possible to release the port and bind it into another process to restore operations. The frontend will appear with status "STOP" on the stats page. The frontend may be specified either by its name or by its numeric ID, prefixed with a sharp ('#'). This command is restricted and can only be issued on sockets configured for level "admin". ``` **disable health** <backend>/<server> ``` Mark the primary health check as temporarily stopped.
This will disable sending of health checks, and the last health check result will be ignored. The server will be in unchecked state and considered UP unless an auxiliary agent check forces it down. This command is restricted and can only be issued on sockets configured for level "admin". ``` **disable server** <backend>/<server> ``` Mark the server DOWN for maintenance. In this mode, no more checks will be performed on the server until it leaves maintenance. If the server is tracked by other servers, those servers will be set to DOWN during the maintenance. In the statistics page, a server DOWN for maintenance will appear with a "MAINT" status, its tracking servers with the "MAINT(via)" one. Both the backend and the server may be specified either by their name or by their numeric ID, prefixed with a sharp ('#'). This command is restricted and can only be issued on sockets configured for level "admin". ``` **enable agent** <backend>/<server> ``` Resume auxiliary agent check that was temporarily stopped. See "[disable agent](#disable%20agent)" for details of the effect of temporarily starting and stopping an auxiliary agent. This command is restricted and can only be issued on sockets configured for level "admin". ``` **enable dynamic-cookie backend** <backend> ``` Enable the generation of dynamic cookies for the backend <backend>. A secret key must also be provided. ``` **enable frontend** <frontend> ``` Resume a frontend which was temporarily stopped. It is possible that some of the listening ports won't be able to bind anymore (eg: if another process took them since the 'disable frontend' operation). If this happens, an error is displayed. Some operating systems might not be able to resume a frontend which was disabled. The frontend may be specified either by its name or by its numeric ID, prefixed with a sharp ('#'). This command is restricted and can only be issued on sockets configured for level "admin". ``` **enable health** <backend>/<server> ``` Resume a primary health check that was temporarily stopped. This will enable sending of health checks again. Please see "[disable health](#disable%20health)" for details. This command is restricted and can only be issued on sockets configured for level "admin". ``` **enable server** <backend>/<server> ``` If the server was previously marked as DOWN for maintenance, this marks the server UP and checks are re-enabled. Both the backend and the server may be specified either by their name or by their numeric ID, prefixed with a sharp ('#'). This command is restricted and can only be issued on sockets configured for level "admin". ``` **experimental-mode** [on|off] ``` Without options, this indicates whether the experimental mode is enabled or disabled on the current connection. When passed "on", it turns the experimental mode on for the current CLI connection only. With "off" it turns it off. The experimental mode is used to access to extra features still in development. These features are currently not stable and should be used with care. They may be subject to breaking changes across versions. When used from the master CLI, this command shouldn't be prefixed, as it will set the mode for any worker when connecting to its CLI. ``` Example: ``` echo "@1; experimental-mode on; <experimental_cmd>..." | socat /var/run/haproxy.master - echo "experimental-mode on; @1 <experimental_cmd>..." | socat /var/run/haproxy.master - ``` **expert-mode** [on|off] ``` This command is similar to experimental-mode but is used to toggle the expert mode. 
The expert mode enables the display of expert commands that can be extremely dangerous for the process and which may occasionally help developers collect important information about complex bugs. Any misuse of these features will likely lead to a process crash. Do not use this option without being invited to do so. Note that this command is purposely not listed in the help message. This command is only accessible at the admin level. Changing to another level automatically resets the expert mode. When used from the master CLI, this command shouldn't be prefixed, as it will set the mode for any worker when connecting to its CLI. ``` Example: ``` echo "@1; expert-mode on; debug dev exit 1" | socat /var/run/haproxy.master - echo "expert-mode on; @1 debug dev exit 1" | socat /var/run/haproxy.master - ``` **get map** <map> <value> **get acl** <acl> <value> ``` Look up the value <value> in the map <map> or in the ACL <acl>. <map> or <acl> are the #<id> or the <file> returned by "[show map](#show%20map)" or "[show acl](#show%20acl)". This command returns all the matching patterns associated with this map. This is useful for debugging maps and ACLs. The output is composed of one line per matching type. Each line is composed of a space-delimited series of words. The first two words are: <match method>: The match method applied. It can be "found", "bool", "int", "ip", "bin", "len", "str", "beg", "sub", "dir", "dom", "end" or "reg". <match result>: The result. Can be "match" or "no-match". The following words are returned only if the pattern matches an entry. <index type>: "tree" or "list". The internal lookup algorithm. <case>: "case-insensitive" or "case-sensitive". The interpretation of the case. <entry matched>: match="<entry>". Return the matched pattern. It is useful with regular expressions. The last two words are used to show the returned value and its type. With the "acl" case, the pattern doesn't exist. return=nothing: No return because there is no "map". return="<value>": The value returned in the string format. return=cannot-display: The value cannot be converted to a string. type="<type>": The type of the returned sample. ``` **get var** <name> ``` Show the existence, type and contents of the process-wide variable 'name'. Only process-wide variables are readable, so the name must begin with 'proc.' otherwise no variable will be found. This command requires levels "[operator](#operator)" or "admin". ``` **get weight** <backend>/<server> ``` Report the current weight and the initial weight of server <server> in backend <backend> or an error if either doesn't exist. The initial weight is the one that appears in the configuration file. Both are normally equal unless the current weight has been changed. Both the backend and the server may be specified either by their name or by their numeric ID, prefixed with a sharp ('#'). ``` **help** [<command>] ``` Print the list of known keywords and their basic usage, or commands matching the requested one. The same help screen is also displayed for unknown commands. ``` **httpclient** <method> <URI> ``` Launch an HTTP client request and print the response on the CLI. Only supported on a CLI connection running in expert mode (see "expert-mode on"). It's only meant for debugging. The httpclient is able to resolve a server name in the URL using the "default" resolvers section, which is populated with the DNS servers of your /etc/resolv.conf by default.
However it won't be able to resolve an host from /etc/hosts if you don't use a local dns daemon which can resolve those. ``` **new ssl ca-file** <cafile> ``` Create a new empty CA file tree entry to be filled with a set of CA certificates and added to a crt-list. This command should be used in combination with "[set ssl ca-file](#set%20ssl%20ca-file)", "[add ssl ca-file](#add%20ssl%20ca-file)" and "[add ssl crt-list](#add%20ssl%20crt-list)". ``` **new ssl cert** <filename> ``` Create a new empty SSL certificate store to be filled with a certificate and added to a directory or a crt-list. This command should be used in combination with "[set ssl cert](#set%20ssl%20cert)" and "[add ssl crt-list](#add%20ssl%20crt-list)". ``` **new ssl crl-file** <crlfile> ``` Create a new empty CRL file tree entry to be filled with a set of CRLs and added to a crt-list. This command should be used in combination with "set ssl crl-file" and "[add ssl crt-list](#add%20ssl%20crt-list)". ``` **prepare acl** <acl> ``` Allocate a new version number in ACL <acl> for atomic replacement. <acl> is the #<id> or the <file> returned by "[show acl](#show%20acl)". The new version number is shown in response after "New version created:". This number will then be usable to prepare additions of new entries into the ACL which will then atomically replace the current ones once committed. It is reported as "next_ver" in "[show acl](#show%20acl)". There is no impact of allocating new versions, as unused versions will automatically be removed once a more recent version is committed. Version numbers are unsigned 32-bit values which wrap at the end, so care must be taken when comparing them in an external program. This command cannot be used if the reference <acl> is a file also used as a map. In this case, the "[prepare map](#prepare%20map)" command must be used instead. ``` **prepare map** <map> ``` Allocate a new version number in map <map> for atomic replacement. <map> is the #<id> or the <file> returned by "[show map](#show%20map)". The new version number is shown in response after "New version created:". This number will then be usable to prepare additions of new entries into the map which will then atomically replace the current ones once committed. It is reported as "next_ver" in "[show map](#show%20map)". There is no impact of allocating new versions, as unused versions will automatically be removed once a more recent version is committed. Version numbers are unsigned 32-bit values which wrap at the end, so care must be taken when comparing them in an external program. ``` **prompt** ``` Toggle the prompt at the beginning of the line and enter or leave interactive mode. In interactive mode, the connection is not closed after a command completes. Instead, the prompt will appear again, indicating the user that the interpreter is waiting for a new command. The prompt consists in a right angle bracket followed by a space "> ". This mode is particularly convenient when one wants to periodically check information such as stats or errors. It is also a good idea to enter interactive mode before issuing a "[help](#help)" command. ``` **quit** ``` Close the connection when in interactive mode. 
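Tying together the interactive mode with the "prepare map", "add map" and "commit map" commands described above, an atomic replacement of a map's visible contents could look like the following session (a sketch only : the socket path, the map reference "#-1" and the version number returned by "prepare map" will differ in practice; the payload is terminated by entering an empty line at the "+ " prompt) :

  $ socat /var/run/haproxy.sock readline
  prompt
  > prepare map #-1
  New version created: 3
  > add map @3 #-1 <<
  + 10.0.0.1 server1
  + 10.0.0.2 server2
  +
  > commit map @3 #-1

Until the final "commit map", lookups keep matching the previous version; once committed, the new entries become visible at once and all past versions are deleted.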
``` **set anon** [on|off] [<key>] ``` This command enables or disables the "anonymized mode" for the current CLI session, which replaces certain fields considered sensitive or confidential in command outputs with hashes that preserve sufficient consistency between elements to help developers identify relations between elements when trying to spot bugs, but a low enough bit count (24) to make them non-reversible due to the high number of possible matches. When turned on, if no key is specified, the global key will be used (either specified in the configuration file by "anonkey" or set via the CLI command "[set anon global-key](#set%20anon%20global-key)"). If no such key was set, a random one will be generated. Otherwise it's possible to specify the 32-bit key to be used for the current session, for example, to reuse the key that was used in a previous dump to help compare outputs. Developers will never need this key and it's recommended never to share it as it could allow to confirm/infirm some guesses about what certain hashes could be hiding. ``` **set dynamic-cookie-key backend** <backend> <value> ``` Modify the secret key used to generate the dynamic persistent cookies. This will break the existing sessions. ``` **set anon global-key** <key> ``` This sets the global anonymizing key to <key>, which must be a 32-bit integer between 0 and 4294967295 (0 disables the global key). This command requires admin privilege. ``` **set map** <map> [<key>|#<ref>] <value> ``` Modify the value corresponding to each key <key> in a map <map>. <map> is the #<id> or <file> returned by "[show map](#show%20map)". If the <ref> is used in place of <key>, only the entry pointed by <ref> is changed. The new value is <value>. ``` **set maxconn frontend** <frontend> <value> ``` Dynamically change the specified frontend's maxconn setting. Any positive value is allowed including zero, but setting values larger than the global maxconn does not make much sense. If the limit is increased and connections were pending, they will immediately be accepted. If it is lowered to a value below the current number of connections, new connections acceptation will be delayed until the threshold is reached. The frontend might be specified by either its name or its numeric ID prefixed with a sharp ('#'). ``` **set maxconn server** <backend/server> <value> ``` Dynamically change the specified server's maxconn setting. Any positive value is allowed including zero, but setting values larger than the global maxconn does not make much sense. ``` **set maxconn global** <maxconn> ``` Dynamically change the global maxconn setting within the range defined by the initial global maxconn setting. If it is increased and connections were pending, they will immediately be accepted. If it is lowered to a value below the current number of connections, new connections acceptation will be delayed until the threshold is reached. A value of zero restores the initial setting. ``` **set profiling** { tasks | memory } { auto | on | off } ``` Enables or disables CPU or memory profiling for the indicated subsystem. This is equivalent to setting or clearing the "profiling" settings in the "global" section of the configuration file. Please also see "[show profiling](#show%20profiling)". Note that manually setting the tasks profiling to "on" automatically resets the scheduler statistics, thus allows to check activity over a given interval. 
The memory profiling is limited to certain operating systems (known to work on the linux-glibc target), and requires USE_MEMORY_PROFILING to be set at compile time. ``` **set rate-limit connections global** <value> ``` Change the process-wide connection rate limit, which is set by the global 'maxconnrate' setting. A value of zero disables the limitation. This limit applies to all frontends and the change has an immediate effect. The value is passed in number of connections per second. ``` **set rate-limit http-compression global** <value> ``` Change the maximum input compression rate, which is set by the global 'maxcomprate' setting. A value of zero disables the limitation. The value is passed in number of kilobytes per second. The value is available in the "show info" on the line "CompressBpsRateLim" in bytes. ``` **set rate-limit sessions global** <value> ``` Change the process-wide session rate limit, which is set by the global 'maxsessrate' setting. A value of zero disables the limitation. This limit applies to all frontends and the change has an immediate effect. The value is passed in number of sessions per second. ``` **set rate-limit ssl-sessions global** <value> ``` Change the process-wide SSL session rate limit, which is set by the global 'maxsslrate' setting. A value of zero disables the limitation. This limit applies to all frontends and the change has an immediate effect. The value is passed in number of sessions per second sent to the SSL stack. It applies before the handshake in order to protect the stack against handshake abuses. ``` **set server** <backend>/<server> addr <ip4 or ip6 address> [port <port>] ``` Replace the current IP address of a server by the one provided. Optionally, the port can be changed using the 'port' parameter. Note that changing the port also support switching from/to port mapping (notation with +X or -Y), only if a port is configured for the health check. ``` **set server** <backend>/<server> agent [ up | down ] ``` Force a server's agent to a new state. This can be useful to immediately switch a server's state regardless of some slow agent checks for example. Note that the change is propagated to tracking servers if any. ``` **set server** <backend>/<server> agent-addr <addr> [port <port>] ``` Change addr for servers agent checks. Allows to migrate agent-checks to another address at runtime. You can specify both IP and hostname, it will be resolved. Optionally, change the port agent. ``` **set server** <backend>/<server> agent-port <port> ``` Change the port used for agent checks. ``` **set server** <backend>/<server> agent-send <value> ``` Change agent string sent to agent check target. Allows to update string while changing server address to keep those two matching. ``` **set server** <backend>/<server> health [ up | stopping | down ] ``` Force a server's health to a new state. This can be useful to immediately switch a server's state regardless of some slow health checks for example. Note that the change is propagated to tracking servers if any. ``` **set server** <backend>/<server> check-addr <ip4 | ip6> [port <port>] ``` Change the IP address used for server health checks. Optionally, change the port used for server health checks. ``` **set server** <backend>/<server> check-port <port> ``` Change the port used for health checking to <port> ``` **set server** <backend>/<server> state [ ready | drain | maint ] ``` Force a server's administrative state to a new state. This can be useful to disable load balancing and/or any traffic to a server. 
Setting the state to "ready" puts the server in normal mode, and the command is the equivalent of the "[enable server](#enable%20server)" command. Setting the state to "maint" disables any traffic to the server as well as any health checks. This is the equivalent of the "[disable server](#disable%20server)" command. Setting the state to "drain" only removes the server from load balancing but still allows it to be checked and to accept new persistent connections. Changes are propagated to tracking servers if any. ``` **set server** <backend>/<server> weight <weight>[%] ``` Change a server's weight to the value passed in argument. This is the exact equivalent of the "[set weight](#set%20weight)" command below. ``` **set server** <backend>/<server> fqdn <FQDN> ``` Change a server's FQDN to the value passed in argument. This requires the internal run-time DNS resolver to be configured and enabled for this server. ``` **set server** <backend>/<server> ssl [ on | off ] (deprecated) ``` This option configures SSL ciphering on outgoing connections to the server. When switched off, all traffic becomes plain text; the health check path is not changed. This command is deprecated; instead, create a new server dynamically with or without SSL using the "[add server](#add%20server)" command. ``` **set severity-output** [ none | number | string ] ``` Change the severity output format of the stats socket connected to for the duration of the current session. ``` **set ssl ca-file** <cafile> <payload> ``` This command is part of a transaction system, the "[commit ssl ca-file](#commit%20ssl%20ca-file)" and "[abort ssl ca-file](#abort%20ssl%20ca-file)" commands could be required. If there is no on-going transaction, it will create a CA file tree entry into which the certificates contained in the payload will be stored. The CA file entry will not be stored in the CA file tree and will only be kept in a temporary transaction. If a transaction with the same filename already exists, the previous CA file entry will be deleted and replaced by the new one. Once the modifications are done, you have to commit the transaction through a "[commit ssl ca-file](#commit%20ssl%20ca-file)" call. If you want to add multiple certificates separately, you can use the "[add ssl ca-file](#add%20ssl%20ca-file)" command. ``` Example: ``` echo -e "set ssl ca-file cafile.pem <<\n$(cat rootCA.crt)\n" | \ socat /var/run/haproxy.stat - echo "commit ssl ca-file cafile.pem" | socat /var/run/haproxy.stat - ``` **set ssl cert** <filename> <payload> ``` This command is part of a transaction system, the "[commit ssl cert](#commit%20ssl%20cert)" and "[abort ssl cert](#abort%20ssl%20cert)" commands could be required. This whole transaction system works on any certificate displayed by the "[show ssl cert](#show%20ssl%20cert)" command, so on any frontend or backend certificate. If there is no on-going transaction, it will duplicate the certificate <filename> in memory to a temporary transaction, then update this transaction with the PEM file in the payload. If a transaction exists with the same filename, it will update this transaction. It's also possible to update the files linked to a certificate (.issuer, .sctl, .ocsp etc.). Once the modifications are done, you have to "[commit ssl cert](#commit%20ssl%20cert)" the transaction. Injection of files over the CLI must be done with caution since an empty line is used to notify the end of the payload. It is recommended to inject a PEM file which has been sanitized.
A simple method would be to remove every empty line and only leave what are in the PEM sections. It could be achieved with a sed command. ``` Example: ``` # With some simple sanitizing echo -e "set ssl cert localhost.pem <<\n$(sed -n '/^$/d;/-BEGIN/,/-END/p' 127.0.0.1.pem)\n" | \ socat /var/run/haproxy.stat - # Complete example with commit echo -e "set ssl cert localhost.pem <<\n$(cat 127.0.0.1.pem)\n" | \ socat /var/run/haproxy.stat - echo -e \ "set ssl cert localhost.pem.issuer <<\n $(cat 127.0.0.1.pem.issuer)\n" | \ socat /var/run/haproxy.stat - echo -e \ "set ssl cert localhost.pem.ocsp <<\n$(base64 -w 1000 127.0.0.1.pem.ocsp)\n" | \ socat /var/run/haproxy.stat - echo "commit ssl cert localhost.pem" | socat /var/run/haproxy.stat - ``` **set ssl crl-file** <crlfile> <payload> ``` This command is part of a transaction system, the "[commit ssl crl-file](#commit%20ssl%20crl-file)" and "[abort ssl crl-file](#abort%20ssl%20crl-file)" commands could be required. If there is no on-going transaction, it will create a CRL file tree entry into which the Revocation Lists contained in the payload will be stored. The CRL file entry will not be stored in the CRL file tree and will only be kept in a temporary transaction. If a transaction with the same filename already exists, the previous CRL file entry will be deleted and replaced by the new one. Once the modifications are done, you have to commit the transaction through a "[commit ssl crl-file](#commit%20ssl%20crl-file)" call. ``` Example: ``` echo -e "set ssl crl-file crlfile.pem <<\n$(cat rootCRL.pem)\n" | \ socat /var/run/haproxy.stat - echo "commit ssl crl-file crlfile.pem" | socat /var/run/haproxy.stat - ``` **set ssl ocsp-response** <response | payload> ``` This command is used to update an OCSP Response for a certificate (see "crt" on "bind" lines). Same controls are performed as during the initial loading of the response. The <response> must be passed as a base64 encoded string of the DER encoded response from the OCSP server. This command is not supported with BoringSSL. ``` Example: ``` openssl ocsp -issuer issuer.pem -cert server.pem \ -host ocsp.issuer.com:80 -respout resp.der echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \ socat stdio /var/run/haproxy.stat using the payload syntax: echo -e "set ssl ocsp-response <<\n$(base64 resp.der)\n" | \ socat stdio /var/run/haproxy.stat ``` **set ssl tls-key** <id> <tlskey> ``` Set the next TLS key for the <id> listener to <tlskey>. This key becomes the ultimate key, while the penultimate one is used for encryption (others just decrypt). The oldest TLS key present is overwritten. <id> is either a numeric #<id> or <file> returned by "[show tls-keys](#show%20tls-keys)". <tlskey> is a base64 encoded 48 or 80 bits TLS ticket key (ex. openssl rand 80 | openssl base64 -A). ``` **set table** <table> key <key> [data.<data\_type> <value>]\* ``` Create or update a stick-table entry in the table. If the key is not present, an entry is inserted. See stick-table in [section 4.2](#4.2) to find all possible values for <data_type>. The most likely use consists in dynamically entering entries for source IP addresses, with a flag in gpc0 to dynamically block an IP address or affect its quality of service. It is possible to pass multiple data_types in a single call. ``` **set timeout cli** <delay> ``` Change the CLI interface timeout for current connection. This can be useful during long debugging sessions where the user needs to constantly inspect some indicators without being disconnected. 
The delay is passed in seconds. ``` **set var** <name> <expression> **set var** <name> expr <expression> **set var** <name> fmt <format> ``` Allows to set or overwrite the process-wide variable 'name' with the result of expression <expression> or format string <format>. Only process-wide variables may be used, so the name must begin with 'proc.' otherwise no variable will be set. The <expression> and <format> may only involve "internal" sample fetch keywords and converters even though the most likely useful ones will be str('something'), int(), simple strings or references to other variables. Note that the command line parser doesn't know about quotes, so any space in the expression must be preceded by a backslash. This command requires levels "[operator](#operator)" or "admin". This command is only supported on a CLI connection running in experimental mode (see "experimental-mode on"). ``` **set weight** <backend>/<server> <weight>[%] ``` Change a server's weight to the value passed in argument. If the value ends with the '%' sign, then the new weight will be relative to the initially configured weight. Absolute weights are permitted between 0 and 256. Relative weights must be positive with the resulting absolute weight is capped at 256. Servers which are part of a farm running a static load-balancing algorithm have stricter limitations because the weight cannot change once set. Thus for these servers, the only accepted values are 0 and 100% (or 0 and the initial weight). Changes take effect immediately, though certain LB algorithms require a certain amount of requests to consider changes. A typical usage of this command is to disable a server during an update by setting its weight to zero, then to enable it again after the update by setting it back to 100%. This command is restricted and can only be issued on sockets configured for level "admin". Both the backend and the server may be specified either by their name or by their numeric ID, prefixed with a sharp ('#'). ``` **show acl** [[@<ver>] <acl>] ``` Dump info about acl converters. Without argument, the list of all available acls is returned. If a <acl> is specified, its contents are dumped. <acl> is the #<id> or <file>. By default the current version of the ACL is shown (the version currently being matched against and reported as 'curr_ver' in the ACL list). It is possible to instead dump other versions by prepending '@<ver>' before the ACL's identifier. The version works as a filter and non-existing versions will simply report no result. The dump format is the same as for the maps even for the sample values. The data returned are not a list of available ACL, but are the list of all patterns composing any ACL. Many of these patterns can be shared with maps. The 'entry_cnt' value represents the count of all the ACL entries, not just the active ones, which means that it also includes entries currently being added. ``` **show anon** ``` Display the current state of the anonymized mode (enabled or disabled) and the current session's key. ``` **show backend** ``` Dump the list of backends available in the running process ``` **show cli level** ``` Display the CLI level of the current CLI session. The result could be 'admin', 'operator' or 'user'. See also the 'operator' and 'user' commands. ``` Example : ``` $ socat /tmp/sock1 readline prompt > operator > show cli level operator > user > show cli level user > operator Permission denied ``` **operator** ``` Decrease the CLI level of the current CLI session to operator. 
It can't be increased. It also drops expert and experimental mode. See also "show cli level". ``` **user** ``` Decrease the CLI level of the current CLI session to user. It can't be increased. It also drops expert and experimental mode. See also "show cli level". ``` **show activity** [-1 | 0 | thread\_num] ``` Reports some counters about internal events that will help developers and more generally people who know haproxy well enough to narrow down the causes of reports of abnormal behaviours. A typical example would be a properly running process never sleeping and eating 100% of the CPU. The output fields will be made of one line per metric, and per-thread counters on the same line. These counters are 32-bit and will wrap during the process's life, which is not a problem since calls to this command will typically be performed twice. The fields are purposely not documented so that their exact meaning is verified in the code where the counters are fed. These values are also reset by the "[clear counters](#clear%20counters)" command. On multi-threaded deployments, the first column will indicate the total (or average depending on the nature of the metric) for all threads, and the list of all threads' values will be represented between square brackets in the thread order. Optionally the thread number to be dumped may be specified in argument. The special value "0" will report the aggregated value (first column), and "-1", which is the default, will display all the columns. Note that just like in single-threaded mode, there will be no brackets when a single column is requested. ``` **show cli sockets** ``` List CLI sockets. The output format is composed of 3 fields separated by spaces. The first field is the socket address; it can be a unix socket, an ipv4 address:port couple or an ipv6 one. Sockets of other types won't be dumped. The second field describes the level of the socket: 'admin', 'user' or 'operator'. The last field lists the processes on which the socket is bound, separated by commas; it can be numbers or 'all'. ``` Example : ``` $ echo 'show cli sockets' | socat stdio /tmp/sock1 # socket lvl processes /tmp/sock1 admin all 127.0.0.1:9999 user 2,3,4 127.0.0.2:9969 user 2 [::1]:9999 operator 2 ``` **show cache** ``` List the configured caches and the objects stored in each cache tree. $ echo 'show cache' | socat stdio /tmp/sock1 0x7f6ac6c5b03a: foobar (shctx:0x7f6ac6c5b000, available blocks:3918) 1 2 3 4 1. pointer to the cache structure 2. cache name 3. pointer to the mmap area (shctx) 4. number of blocks available for reuse in the shctx 0x7f6ac6c5b4cc hash:286881868 vary:0x0011223344556677 size:39114 (39 blocks), refcount:9, expire:237 1 2 3 4 5 6 7 1. pointer to the cache entry 2. first 32 bits of the hash 3. secondary hash of the entry in case of vary 4. size of the object in bytes 5. number of blocks used for the object 6. number of transactions using the entry 7. expiration time, can be negative if already expired ``` **show env** [<name>] ``` Dump one or all environment variables known by the process. Without any argument, all variables are dumped. With an argument, only the specified variable is dumped if it exists. Otherwise "Variable not found" is emitted. Variables are dumped in the same format as they are stored or returned by the "env" utility, that is, "<name>=<value>". This can be handy when debugging certain configuration files making heavy use of environment variables to ensure that they contain the expected values.
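For example, a single variable could be checked as follows (a sketch; the socket path and variable name are illustrative):

  $ echo "show env MY_BACKEND_ADDR" | socat stdio /var/run/haproxy.sock
  MY_BACKEND_ADDR=192.168.0.10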
This command is restricted and can only be issued on sockets configured for levels "[operator](#operator)" or "admin". ``` **show errors** [<iid>|<proxy>] [request|response] ``` Dump last known request and response errors collected by frontends and backends. If <iid> is specified, the limit the dump to errors concerning either frontend or backend whose ID is <iid>. Proxy ID "-1" will cause all instances to be dumped. If a proxy name is specified instead, its ID will be used as the filter. If "request" or "response" is added after the proxy name or ID, only request or response errors will be dumped. This command is restricted and can only be issued on sockets configured for levels "[operator](#operator)" or "admin". The errors which may be collected are the last request and response errors caused by protocol violations, often due to invalid characters in header names. The report precisely indicates what exact character violated the protocol. Other important information such as the exact date the error was detected, frontend and backend names, the server name (when known), the internal session ID and the source address which has initiated the session are reported too. All characters are returned, and non-printable characters are encoded. The most common ones (\t = 9, \n = 10, \r = 13 and \e = 27) are encoded as one letter following a backslash. The backslash itself is encoded as '\\' to avoid confusion. Other non-printable characters are encoded '\xNN' where NN is the two-digits hexadecimal representation of the character's ASCII code. Lines are prefixed with the position of their first character, starting at 0 for the beginning of the buffer. At most one input line is printed per line, and large lines will be broken into multiple consecutive output lines so that the output never goes beyond 79 characters wide. It is easy to detect if a line was broken, because it will not end with '\n' and the next line's offset will be followed by a '+' sign, indicating it is a continuation of previous line. ``` Example : ``` $ echo "show errors -1 response" | socat stdio /tmp/sock1 >>> [04/Mar/2009:15:46:56.081] backend http-in (#2) : invalid response src 127.0.0.1, session #54, frontend fe-eth0 (#1), server s2 (#1) response length 213 bytes, error at position 23: 00000 HTTP/1.0 200 OK\r\n 00017 header/bizarre:blah\r\n 00038 Location: blah\r\n 00054 Long-line: this is a very long line which should b 00104+ e broken into multiple lines on the output buffer, 00154+ otherwise it would be too large to print in a ter 00204+ minal\r\n 00211 \r\n In the example above, we see that the backend "http-in" which has internal ID 2 has blocked an invalid response from its server s2 which has internal ID 1. The request was on session 54 initiated by source 127.0.0.1 and received by frontend fe-eth0 whose ID is 1. The total response length was 213 bytes when the error was detected, and the error was at byte 23. This is the slash ('/') in header name "header/bizarre", which is not a valid HTTP character for a header name. ``` **show events** [<sink>] [-w] [-n] ``` With no option, this lists all known event sinks and their types. With an option, it will dump all available events in the designated sink if it is of type buffer. If option "-w" is passed after the sink name, then once the end of the buffer is reached, the command will wait for new events and display them. It is possible to stop the operation by entering any input (which will be discarded) or by closing the session. 
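For example, a buffer-type sink could be followed live from an interactive session (a sketch; the sink name "buf0" is an assumption and must match a ring declared in the configuration):

  $ socat /var/run/haproxy.sock readline
  prompt
  > show events buf0 -w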
Finally, option "-n" is used to directly seek to the end of the buffer, which is often convenient when combined with "-w" to only report new events. For convenience, "-wn" or "-nw" may be used to enable both options at once. ``` **show fd** [<fd>] ``` Dump the list of either all open file descriptors or just the one number <fd> if specified. This is only aimed at developers who need to observe internal states in order to debug complex issues such as abnormal CPU usages. One fd is reported per lines, and for each of them, its state in the poller using upper case letters for enabled flags and lower case for disabled flags, using "P" for "polled", "R" for "ready", "A" for "active", the events status using "H" for "hangup", "E" for "error", "O" for "output", "P" for "priority" and "I" for "input", a few other flags like "N" for "[new](#new)" (just added into the fd cache), "U" for "updated" (received an update in the fd cache), "L" for "linger_risk", "C" for "cloned", then the cached entry position, the pointer to the internal owner, the pointer to the I/O callback and its name when known. When the owner is a connection, the connection flags, and the target are reported (frontend, proxy or server). When the owner is a listener, the listener's state and its frontend are reported. There is no point in using this command without a good knowledge of the internals. It's worth noting that the output format may evolve over time so this output must not be parsed by tools designed to be durable. Some internal structure states may look suspicious to the function listing them, in this case the output line will be suffixed with an exclamation mark ('!'). This may help find a starting point when trying to diagnose an incident. ``` **show info** [typed|json] [desc] [float] ``` Dump info about haproxy status on current process. If "typed" is passed as an optional argument, field numbers, names and types are emitted as well so that external monitoring products can easily retrieve, possibly aggregate, then report information found in fields they don't know. Each field is dumped on its own line. If "json" is passed as an optional argument then information provided by "typed" output is provided in JSON format as a list of JSON objects. By default, the format contains only two columns delimited by a colon (':'). The left one is the field name and the right one is the value. It is very important to note that in typed output format, the dump for a single object is contiguous so that there is no need for a consumer to store everything at once. If "float" is passed as an optional argument, some fields usually emitted as integers may switch to floats for higher accuracy. It is purposely unspecified which ones are concerned as this might evolve over time. Using this option implies that the consumer is able to process floats. The output format used is sprintf("%f"). When using the typed output format, each line is made of 4 columns delimited by colons (':'). The first column is a dot-delimited series of 3 elements. The first element is the numeric position of the field in the list (starting at zero). This position shall not change over time, but holes are to be expected, depending on build options or if some fields are deleted in the future. The second element is the field name as it appears in the default "[show info](#show%20info)" output. The third element is the relative process number starting at 1. The rest of the line starting after the first colon follows the "typed output format" described in the section above. 
In short, the second column (after the first ':') indicates the origin, nature and scope of the variable. The third column indicates the type of the field, among "s32", "s64", "u32", "u64" and "str". Then the fourth column is the value itself, which the consumer knows how to parse thanks to column 3 and how to process thanks to column 2. Thus the overall line format in typed mode is : <field_pos>.<field_name>.<process_num>:<tags>:<type>:<value> When "desc" is appended to the command, one extra colon followed by a quoted string is appended with a description for the metric. At the time of writing, this is only supported for the "typed" and default output formats. ``` Example : ``` > show info Name: HAProxy Version: 1.7-dev1-de52ea-146 Release_date: 2016/03/11 Nbproc: 1 Process_num: 1 Pid: 28105 Uptime: 0d 0h00m04s Uptime_sec: 4 Memmax_MB: 0 PoolAlloc_MB: 0 PoolUsed_MB: 0 PoolFailed: 0 (...) > show info typed 0.Name.1:POS:str:HAProxy 1.Version.1:POS:str:1.7-dev1-de52ea-146 2.Release_date.1:POS:str:2016/03/11 3.Nbproc.1:CGS:u32:1 4.Process_num.1:KGP:u32:1 5.Pid.1:SGP:u32:28105 6.Uptime.1:MDP:str:0d 0h00m08s 7.Uptime_sec.1:MDP:u32:8 8.Memmax_MB.1:CLP:u32:0 9.PoolAlloc_MB.1:MGP:u32:0 10.PoolUsed_MB.1:MGP:u32:0 11.PoolFailed.1:MCP:u32:0 (...) ``` ``` In the typed format, the presence of the process ID at the end of the first column makes it very easy to visually aggregate outputs from multiple processes. ``` Example : ``` $ ( echo show info typed | socat /var/run/haproxy.sock1 ; \ echo show info typed | socat /var/run/haproxy.sock2 ) | \ sort -t . -k 1,1n -k 2,2 -k 3,3n 0.Name.1:POS:str:HAProxy 0.Name.2:POS:str:HAProxy 1.Version.1:POS:str:1.7-dev1-868ab3-148 1.Version.2:POS:str:1.7-dev1-868ab3-148 2.Release_date.1:POS:str:2016/03/11 2.Release_date.2:POS:str:2016/03/11 3.Nbproc.1:CGS:u32:2 3.Nbproc.2:CGS:u32:2 4.Process_num.1:KGP:u32:1 4.Process_num.2:KGP:u32:2 5.Pid.1:SGP:u32:30120 5.Pid.2:SGP:u32:30121 6.Uptime.1:MDP:str:0d 0h01m28s 6.Uptime.2:MDP:str:0d 0h01m28s (...) ``` ``` The format of JSON output is described in a schema which may be output using "[show schema json](#show%20schema%20json)". The JSON output contains no extra whitespace in order to reduce the volume of output. For human consumption passing the output through a pretty printer may be helpful. Example : $ echo "show info json" | socat /var/run/haproxy.sock stdio | \ python -m json.tool The JSON output contains no extra whitespace in order to reduce the volume of output. For human consumption passing the output through a pretty printer may be helpful. Example : $ echo "show info json" | socat /var/run/haproxy.sock stdio | \ python -m json.tool ``` **show libs** ``` Dump the list of loaded shared dynamic libraries and object files, on systems that support it. When available, for each shared object the range of virtual addresses will be indicated, the size and the path to the object. This can be used for example to try to estimate what library provides a function that appears in a dump. Note that on many systems, addresses will change upon each restart (address space randomization), so that this list would need to be retrieved upon startup if it is expected to be used to analyse a core file. This command may only be issued on sockets configured for levels "[operator](#operator)" or "admin". Note that the output format may vary between operating systems, architectures and even haproxy versions, and ought not to be relied on in scripts. ``` **show map** [[@<ver>] <map>] ``` Dump info about map converters. 
Without argument, the list of all available maps is returned. If a <map> is specified, its contents are dumped. <map> is the #<id> or <file>. By default the current version of the map is shown (the version currently being matched against and reported as 'curr_ver' in the map list). It is possible to instead dump other versions by prepending '@<ver>' before the map's identifier. The version works as a filter and non-existing versions will simply report no result. The 'entry_cnt' value represents the count of all the map entries, not just the active ones, which means that it also includes entries currently being added. In the output, the first column is a unique entry identifier, which is usable as a reference for operations "[del map](#del%20map)" and "[set map](#set%20map)". The second column is the pattern and the third column is the sample if available. The data returned are not directly a list of available maps, but are the list of all patterns composing any map. Many of these patterns can be shared with ACL. ``` **show peers** [dict|-] [<peers section>] ``` Dump info about the peers configured in "peers" sections. Without argument, the list of the peers belonging to all the "peers" sections are listed. If <peers section> is specified, only the information about the peers belonging to this "peers" section are dumped. When "dict" is specified before the peers section name, the entire Tx/Rx dictionary caches will also be dumped (very large). Passing "-" may be required to dump a peers section called "dict". Here are two examples of outputs where hostA, hostB and hostC peers belong to "sharedlb" peers sections. Only hostA and hostB are connected. Only hostA has sent data to hostB. $ echo "[show peers](#show%20peers)" | socat - /tmp/hostA 0x55deb0224320: [15/Apr/2019:11:28:01] id=sharedlb state=0 flags=0x3 \ resync_timeout=<PAST> task_calls=45122 0x55deb022b540: id=hostC(remote) addr=127.0.0.12:10002 status=CONN \ reconnect=4s confirm=0 flags=0x0 0x55deb022a440: id=hostA(local) addr=127.0.0.10:10000 status=NONE \ reconnect=<NEVER> confirm=0 flags=0x0 0x55deb0227d70: id=hostB(remote) addr=127.0.0.11:10001 status=ESTA reconnect=2s confirm=0 flags=0x20000200 appctx:0x55deb028fba0 st0=7 st1=0 task_calls=14456 \ state=EST xprt=RAW src=127.0.0.1:37257 addr=127.0.0.10:10000 remote_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1 last_local_table:0x55deb0224a10 id=stkt local_id=1 remote_id=1 shared tables: 0x55deb0224a10 local_id=1 remote_id=1 flags=0x0 remote_data=0x65 last_acked=0 last_pushed=3 last_get=0 teaching_origin=0 update=3 table:0x55deb022d6a0 id=stkt update=3 localupdate=3 \ commitupdate=3 syncing=0 $ echo "[show peers](#show%20peers)" | socat - /tmp/hostB 0x55871b5ab320: [15/Apr/2019:11:28:03] id=sharedlb state=0 flags=0x3 \ resync_timeout=<PAST> task_calls=3 0x55871b5b2540: id=hostC(remote) addr=127.0.0.12:10002 status=CONN \ reconnect=3s confirm=0 flags=0x0 0x55871b5b1440: id=hostB(local) addr=127.0.0.11:10001 status=NONE \ reconnect=<NEVER> confirm=0 flags=0x0 0x55871b5aed70: id=hostA(remote) addr=127.0.0.10:10000 status=ESTA \ reconnect=2s confirm=0 flags=0x20000200 appctx:0x7fa46800ee00 st0=7 st1=0 task_calls=62356 \ state=EST remote_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1 last_local_table:0x55871b5ab960 id=stkt local_id=1 remote_id=1 shared tables: 0x55871b5ab960 local_id=1 remote_id=1 flags=0x0 remote_data=0x65 last_acked=3 last_pushed=0 last_get=3 teaching_origin=0 update=0 table:0x55871b5b46a0 id=stkt update=1 localupdate=0 \ commitupdate=0 syncing=0 ``` **show 
pools** [byname|bysize|byusage] [match <pfx>] [<nb>] ``` Dump the status of internal memory pools. This is useful to track memory usage when suspecting a memory leak for example. It does exactly the same as the SIGQUIT when running in foreground except that it does not flush the pools. The output is not sorted by default. If "byname" is specified, it is sorted by pool name; if "bysize" is specified, it is sorted by item size in reverse order; if "byusage" is specified, it is sorted by total usage in reverse order, and only used entries are shown. It is also possible to limit the output to the <nb> first entries (e.g. when sorting by usage). Finally, if "match" followed by a prefix is specified, then only pools whose name starts with this prefix will be shown. The reported total only concerns pools matching the filtering criteria. Example: $ socat - /tmp/haproxy.sock <<< "show pools match quic byusage" Dumping pools usage. Use SIGQUIT to flush them. - Pool quic_conn_r (65560 bytes) : 1337 allocated (87653720 bytes), ... - Pool quic_crypto (1048 bytes) : 6685 allocated (7005880 bytes), ... - Pool quic_conn (4056 bytes) : 1337 allocated (5422872 bytes), ... - Pool quic_rxbuf (262168 bytes) : 8 allocated (2097344 bytes), ... - Pool quic_connne (184 bytes) : 9359 allocated (1722056 bytes), ... - Pool quic_frame (184 bytes) : 7938 allocated (1460592 bytes), ... - Pool quic_tx_pac (152 bytes) : 6454 allocated (981008 bytes), ... - Pool quic_tls_ke (56 bytes) : 12033 allocated (673848 bytes), ... - Pool quic_rx_pac (408 bytes) : 1596 allocated (651168 bytes), ... - Pool quic_tls_se (88 bytes) : 6685 allocated (588280 bytes), ... - Pool quic_cstrea (88 bytes) : 4011 allocated (352968 bytes), ... - Pool quic_tls_iv (24 bytes) : 12033 allocated (288792 bytes), ... - Pool quic_dgram (344 bytes) : 732 allocated (251808 bytes), ... - Pool quic_arng (56 bytes) : 4011 allocated (224616 bytes), ... - Pool quic_conn_c (152 bytes) : 1337 allocated (203224 bytes), ... Total: 15 pools, 109578176 bytes allocated, 109578176 used ... ``` **show profiling** [{all | status | tasks | memory}] [byaddr|bytime|aggr|<max\_lines>]\* ``` Dumps the current profiling settings, one per line, as well as the command needed to change them. When tasks profiling is enabled, some per-function statistics collected by the scheduler will also be emitted, with a summary covering the number of calls, total/avg CPU time and total/avg latency. When memory profiling is enabled, some information such as the number of allocations/releases and their sizes will be reported. It is possible to limit the dump to only the profiling status, the tasks, or the memory profiling by specifying the respective keywords; by default all profiling information are dumped. It is also possible to limit the number of lines of output of each category by specifying a numeric limit. If is possible to request that the output is sorted by address or by total execution time instead of usage, e.g. to ease comparisons between subsequent calls or to check what needs to be optimized, and to aggregate task activity by called function instead of seeing the details. Please note that profiling is essentially aimed at developers since it gives hints about where CPU cycles or memory are wasted in the code. There is nothing useful to monitor there. ``` **show resolvers** [<resolvers section id>] ``` Dump statistics for the given resolvers section, or all resolvers sections if no section is supplied. 
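For example (a sketch; the resolvers section name "mydns" and the socket path are illustrative):

  $ echo "show resolvers mydns" | socat stdio /var/run/haproxy.sock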
For each name server, the following counters are reported: sent: number of DNS requests sent to this server valid: number of DNS valid responses received from this server update: number of DNS responses used to update the server's IP address cname: number of CNAME responses cname_error: CNAME errors encountered with this server any_err: number of empty response (IE: server does not support ANY type) nx: non existent domain response received from this server timeout: how many time this server did not answer in time refused: number of requests refused by this server other: any other DNS errors invalid: invalid DNS response (from a protocol point of view) too_big: too big response outdated: number of response arrived too late (after an other name server) ``` **show servers conn** [<backend>] ``` Dump the current and idle connections state of the servers belonging to the designated backend (or all backends if none specified). A backend name or identifier may be used. The output consists in a header line showing the fields titles, then one server per line with for each, the backend name and ID, server name and ID, the address, port and a series or values. The number of fields varies depending on thread count. Given the threaded nature of idle connections, it's important to understand that some values may change once read, and that as such, consistency within a line isn't granted. This output is mostly provided as a debugging tool and is not relevant to be routinely monitored nor graphed. ``` **show servers state** [<backend>] ``` Dump the state of the servers found in the running configuration. A backend name or identifier may be provided to limit the output to this backend only. The dump has the following format: - first line contains the format version (1 in this specification); - second line contains the column headers, prefixed by a sharp ('#'); - third line and next ones contain data; - each line starting by a sharp ('#') is considered as a comment. Since multiple versions of the output may co-exist, below is the list of fields and their order per file format version : 1: be_id: Backend unique id. be_name: Backend label. srv_id: Server unique id (in the backend). srv_name: Server label. srv_addr: Server IP address. srv_op_state: Server operational state (UP/DOWN/...). 0 = SRV_ST_STOPPED The server is down. 1 = SRV_ST_STARTING The server is warming up (up but throttled). 2 = SRV_ST_RUNNING The server is fully up. 3 = SRV_ST_STOPPING The server is up but soft-stopping (eg: 404). srv_admin_state: Server administrative state (MAINT/DRAIN/...). The state is actually a mask of values : 0x01 = SRV_ADMF_FMAINT The server was explicitly forced into maintenance. 0x02 = SRV_ADMF_IMAINT The server has inherited the maintenance status from a tracked server. 0x04 = SRV_ADMF_CMAINT The server is in maintenance because of the configuration. 0x08 = SRV_ADMF_FDRAIN The server was explicitly forced into drain state. 0x10 = SRV_ADMF_IDRAIN The server has inherited the drain status from a tracked server. 0x20 = SRV_ADMF_RMAINT The server is in maintenance because of an IP address resolution failure. 0x40 = SRV_ADMF_HMAINT The server FQDN was set from stats socket. srv_uweight: User visible server's weight. srv_iweight: Server's initial weight. srv_time_since_last_change: Time since last operational change. srv_check_status: Last health check status. srv_check_result: Last check result (FAILED/PASSED/...). 0 = CHK_RES_UNKNOWN Initialized to this by default. 
1 = CHK_RES_NEUTRAL Valid check but no status information. 2 = CHK_RES_FAILED Check failed. 3 = CHK_RES_PASSED Check succeeded and server is fully up again. 4 = CHK_RES_CONDPASS Check reports the server doesn't want new sessions. srv_check_health: Checks rise / fall current counter. srv_check_state: State of the check (ENABLED/PAUSED/...). The state is actually a mask of values : 0x01 = CHK_ST_INPROGRESS A check is currently running. 0x02 = CHK_ST_CONFIGURED This check is configured and may be enabled. 0x04 = CHK_ST_ENABLED This check is currently administratively enabled. 0x08 = CHK_ST_PAUSED Checks are paused because of maintenance (health only). srv_agent_state: State of the agent check (ENABLED/PAUSED/...). This state uses the same mask values as "srv_check_state", adding this specific one : 0x10 = CHK_ST_AGENT Check is an agent check (otherwise it's a health check). bk_f_forced_id: Flag to know if the backend ID is forced by configuration. srv_f_forced_id: Flag to know if the server's ID is forced by configuration. srv_fqdn: Server FQDN. srv_port: Server port. srvrecord: DNS SRV record associated to this SRV. srv_use_ssl: use ssl for server connections. srv_check_port: Server health check port. srv_check_addr: Server health check address. srv_agent_addr: Server health agent address. srv_agent_port: Server health agent port. ``` **show sess** ``` Dump all known sessions. Avoid doing this on slow connections as this can be huge. This command is restricted and can only be issued on sockets configured for levels "[operator](#operator)" or "admin". Note that on machines with quickly recycled connections, it is possible that this output reports less entries than really exist because it will dump all existing sessions up to the last one that was created before the command was entered; those which die in the mean time will not appear. ``` **show sess** <id> ``` Display a lot of internal information about the specified session identifier. This identifier is the first field at the beginning of the lines in the dumps of "[show sess](#show%20sess)" (it corresponds to the session pointer). Those information are useless to most users but may be used by haproxy developers to troubleshoot a complex bug. The output format is intentionally not documented so that it can freely evolve depending on demands. You may find a description of all fields returned in src/dumpstats.c The special id "all" dumps the states of all sessions, which must be avoided as much as possible as it is highly CPU intensive and can take a lot of time. ``` **show stat** [domain <dns|proxy>] [{<iid>|<proxy>} <type> <sid>] [typed|json] \ [desc] [up|no-maint] ``` Dump statistics. The domain is used to select which statistics to print; dns and proxy are available for now. By default, the CSV format is used; you can activate the extended typed output format described in the section above if "typed" is passed after the other arguments; or in JSON if "json" is passed after the other arguments. By passing <id>, <type> and <sid>, it is possible to dump only selected items : - <iid> is a proxy ID, -1 to dump everything. Alternatively, a proxy name <proxy> may be specified. In this case, this proxy's ID will be used as the ID selector. - <type> selects the type of dumpable objects : 1 for frontends, 2 for backends, 4 for servers, -1 for everything. These values can be ORed, for example: 1 + 2 = 3 -> frontend + backend. 1 + 2 + 4 = 7 -> frontend + backend + server. - <sid> is a server ID, -1 to dump everything from the selected proxy. 
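For instance, assuming a proxy whose ID is 2, the following would dump only its frontend and backend lines (type 1 + 2 = 3) and skip the servers (a sketch; the socket path is illustrative):

  $ echo "show stat 2 3 -1" | socat stdio /var/run/haproxy.sock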
``` Example : ``` $ echo "show info;show stat" | socat stdio unix-connect:/tmp/sock1 >>> Name: HAProxy Version: 1.4-dev2-49 Release_date: 2009/09/23 Nbproc: 1 Process_num: 1 (...) # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq, (...) stats,FRONTEND,,,0,0,1000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,1,0, (...) stats,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,250,(...) (...) www1,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,250, (...) $ ``` ``` In this example, two commands have been issued at once. That way it's easy to find which process the stats apply to in multi-process mode. This is not needed in the typed output format as the process number is reported on each line. Notice the empty line after the information output which marks the end of the first block. A similar empty line appears at the end of the second block (stats) so that the reader knows the output has not been truncated. When "typed" is specified, the output format is more suitable to monitoring tools because it provides numeric positions and indicates the type of each output field. Each value stands on its own line with process number, element number, nature, origin and scope. This same format is available via the HTTP stats by passing ";typed" after the URI. It is very important to note that in typed output format, the dump for a single object is contiguous so that there is no need for a consumer to store everything at once. The "up" modifier will result in listing only servers which are reportedly up or not checked. Those down, unresolved, or in maintenance will not be listed. This is analogous to the ";up" option on the HTTP stats. Similarly, the "no-maint" modifier will act like the ";no-maint" HTTP modifier and will result in disabled servers not being listed. The difference is that those which are enabled but down will not be evicted. When using the typed output format, each line is made of 4 columns delimited by colons (':'). The first column is a dot-delimited series of 6 elements. The first element is a letter indicating the type of the object being described. At the moment the following object types are known : 'F' for a frontend, 'B' for a backend, 'L' for a listener, and 'S' for a server. The second element is a positive integer representing the unique identifier of the proxy the object belongs to. It is equivalent to the "iid" column of the CSV output and matches the value in front of the optional "id" directive found in the frontend or backend section. The third element is a positive integer containing the unique object identifier inside the proxy, and corresponds to the "sid" column of the CSV output. ID 0 is reported when dumping a frontend or a backend. For a listener or a server, this corresponds to their respective ID inside the proxy. The fourth element is the numeric position of the field in the list (starting at zero). This position shall not change over time, but holes are to be expected, depending on build options or if some fields are deleted in the future. The fifth element is the field name as it appears in the CSV output. The sixth element is a positive integer and is the relative process number starting at 1. The rest of the line starting after the first colon follows the "typed output format" described in the section above. In short, the second column (after the first ':') indicates the origin, nature and scope of the variable. The third column indicates the field type, among "s32", "s64", "u32", "u64", "flt" and "str".
Then the fourth column is the value itself, which the consumer knows how to parse thanks to column 3 and how to process thanks to column 2. When "desc" is appended to the command, one extra colon followed by a quoted string is appended with a description for the metric. At the time of writing, this is only supported for the "typed" output format. Thus the overall line format in typed mode is : <obj>.<px_id>.<id>.<fpos>.<fname>.<process_num>:<tags>:<type>:<value> Here's an example of typed output format : $ echo "show stat typed" | socat stdio unix-connect:/tmp/sock1 F.2.0.0.pxname.1:MGP:str:private-frontend F.2.0.1.svname.1:MGP:str:FRONTEND F.2.0.8.bin.1:MGP:u64:0 F.2.0.9.bout.1:MGP:u64:0 F.2.0.40.hrsp_2xx.1:MGP:u64:0 L.2.1.0.pxname.1:MGP:str:private-frontend L.2.1.1.svname.1:MGP:str:sock-1 L.2.1.17.status.1:MGP:str:OPEN L.2.1.73.addr.1:MGP:str:0.0.0.0:8001 S.3.13.60.rtime.1:MCP:u32:0 S.3.13.61.ttime.1:MCP:u32:0 S.3.13.62.agent_status.1:MGP:str:L4TOUT S.3.13.64.agent_duration.1:MGP:u64:2001 S.3.13.65.check_desc.1:MCP:str:Layer4 timeout S.3.13.66.agent_desc.1:MCP:str:Layer4 timeout S.3.13.67.check_rise.1:MCP:u32:2 S.3.13.68.check_fall.1:MCP:u32:3 S.3.13.69.check_health.1:SGP:u32:0 S.3.13.70.agent_rise.1:MaP:u32:1 S.3.13.71.agent_fall.1:SGP:u32:1 S.3.13.72.agent_health.1:SGP:u32:1 S.3.13.73.addr.1:MCP:str:1.255.255.255:8888 S.3.13.75.mode.1:MAP:str:http B.3.0.0.pxname.1:MGP:str:private-backend B.3.0.1.svname.1:MGP:str:BACKEND B.3.0.2.qcur.1:MGP:u32:0 B.3.0.3.qmax.1:MGP:u32:0 B.3.0.4.scur.1:MGP:u32:0 B.3.0.5.smax.1:MGP:u32:0 B.3.0.6.slim.1:MGP:u32:1000 B.3.0.55.lastsess.1:MMP:s32:-1 (...) In the typed format, the presence of the process ID at the end of the first column makes it very easy to visually aggregate outputs from multiple processes, as show in the example below where each line appears for each process : $ ( echo show stat typed | socat /var/run/haproxy.sock1 - ; \ echo show stat typed | socat /var/run/haproxy.sock2 - ) | \ sort -t . -k 1,1 -k 2,2n -k 3,3n -k 4,4n -k 5,5 -k 6,6n B.3.0.0.pxname.1:MGP:str:private-backend B.3.0.0.pxname.2:MGP:str:private-backend B.3.0.1.svname.1:MGP:str:BACKEND B.3.0.1.svname.2:MGP:str:BACKEND B.3.0.2.qcur.1:MGP:u32:0 B.3.0.2.qcur.2:MGP:u32:0 B.3.0.3.qmax.1:MGP:u32:0 B.3.0.3.qmax.2:MGP:u32:0 B.3.0.4.scur.1:MGP:u32:0 B.3.0.4.scur.2:MGP:u32:0 B.3.0.5.smax.1:MGP:u32:0 B.3.0.5.smax.2:MGP:u32:0 B.3.0.6.slim.1:MGP:u32:1000 B.3.0.6.slim.2:MGP:u32:1000 (...) The format of JSON output is described in a schema which may be output using "[show schema json](#show%20schema%20json)". The JSON output contains no extra whitespace in order to reduce the volume of output. For human consumption passing the output through a pretty printer may be helpful. Example : $ echo "show stat json" | socat /var/run/haproxy.sock stdio | \ python -m json.tool The JSON output contains no extra whitespace in order to reduce the volume of output. For human consumption passing the output through a pretty printer may be helpful. Example : $ echo "show stat json" | socat /var/run/haproxy.sock stdio | \ python -m json.tool ``` **show ssl ca-file** [<cafile>[:<index>]] ``` Display the list of CA files used by HAProxy and their respective certificate counts. If a filename is prefixed by an asterisk, it is a transaction which is not committed yet. If a <cafile> is specified without <index>, it will show the status of the CA file ("Used"/"Unused") followed by details about all the certificates contained in the CA file. 
The details displayed for every certificate are the same as the ones displayed by a "[show ssl cert](#show%20ssl%20cert)" command. If a <cafile> is specified followed by an <index>, it will only display the details of the certificate having the specified index. Indexes start from 1. If the index is invalid (too big for instance), nothing will be displayed. This command can be useful to check if a CA file was properly updated. You can also display the details of an ongoing transaction by prefixing the filename by an asterisk. ``` Example : ``` $ echo "[show ssl ca-file](#show%20ssl%20ca-file)" | socat /var/run/haproxy.master - # transaction *cafile.crt - 2 certificate(s) # filename cafile.crt - 1 certificate(s) $ echo "show ssl ca-file cafile.crt" | socat /var/run/haproxy.master - Filename: /home/tricot/work/haproxy/reg-tests/ssl/set_cafile_ca2.crt Status: Used Certificate #1: Serial: 11A4D2200DC84376E7D233CAFF39DF44BF8D1211 notBefore: Apr 1 07:40:53 2021 GMT notAfter: Aug 17 07:40:53 2048 GMT Subject Alternative Name: Algorithm: RSA4096 SHA1 FingerPrint: A111EF0FEFCDE11D47FE3F33ADCA8435EBEA4864 Subject: /C=FR/ST=Some-State/O=HAProxy Technologies/CN=HAProxy Technologies CA Issuer: /C=FR/ST=Some-State/O=HAProxy Technologies/CN=HAProxy Technologies CA $ echo "show ssl ca-file *cafile.crt:2" | socat /var/run/haproxy.master - Filename: */home/tricot/work/haproxy/reg-tests/ssl/set_cafile_ca2.crt Status: Unused Certificate #2: Serial: 587A1CE5ED855040A0C82BF255FF300ADB7C8136 [...] ``` **show ssl cert** [<filename>] ``` Display the list of certificates used on frontends and backends. If a filename is prefixed by an asterisk, it is a transaction which is not committed yet. If a filename is specified, it will show details about the certificate. This command can be useful to check if a certificate was well updated. You can also display details on a transaction by prefixing the filename by an asterisk. This command can also be used to display the details of a certificate's OCSP response by suffixing the filename with a ".ocsp" extension. It works for committed certificates as well as for ongoing transactions. On a committed certificate, this command is equivalent to calling "[show ssl ocsp-response](#show%20ssl%20ocsp-response)" with the certificate's corresponding OCSP response ID. ``` Example : ``` $ echo "@1 show ssl cert" | socat /var/run/haproxy.master - # transaction *test.local.pem # filename test.local.pem $ echo "@1 show ssl cert test.local.pem" | socat /var/run/haproxy.master - Filename: test.local.pem Serial: 03ECC19BA54B25E85ABA46EE561B9A10D26F notBefore: Sep 13 21:20:24 2019 GMT notAfter: Dec 12 21:20:24 2019 GMT Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 Subject: /CN=test.local Subject Alternative Name: DNS:test.local, DNS:imap.test.local Algorithm: RSA2048 SHA1 FingerPrint: 417A11CAE25F607B24F638B4A8AEE51D1E211477 $ echo "@1 show ssl cert *test.local.pem" | socat /var/run/haproxy.master - Filename: *test.local.pem [...] ``` **show ssl crl-file** [<crlfile>[:<index>]] ``` Display the list of CRL files used by HAProxy. If a filename is prefixed by an asterisk, it is a transaction which is not committed yet. If a <crlfile> is specified without <index>, it will show the status of the CRL file ("Used"/"Unused") followed by details about all the Revocation Lists contained in the CRL file. The details displayed for every list are based on the output of "openssl crl -text -noout -in <file>". 
If a <crlfile> is specified followed by an <index>, it will only display the details of the list having the specified index. Indexes start from 1. If the index is invalid (too big for instance), nothing will be displayed. This command can be useful to check if a CRL file was properly updated. You can also display the details of an ongoing transaction by prefixing the filename by an asterisk. ``` Example : ``` $ echo "[show ssl crl-file](#show%20ssl%20crl-file)" | socat /var/run/haproxy.master - # transaction *crlfile.pem # filename crlfile.pem $ echo "show ssl crl-file crlfile.pem" | socat /var/run/haproxy.master - Filename: /home/tricot/work/haproxy/reg-tests/ssl/crlfile.pem Status: Used Certificate Revocation List #1: Version 1 Signature Algorithm: sha256WithRSAEncryption Issuer: /C=FR/O=HAProxy Technologies/CN=Intermediate CA2 Last Update: Apr 23 14:45:39 2021 GMT Next Update: Sep 8 14:45:39 2048 GMT Revoked Certificates: Serial Number: 1008 Revocation Date: Apr 23 14:45:36 2021 GMT Certificate Revocation List #2: Version 1 Signature Algorithm: sha256WithRSAEncryption Issuer: /C=FR/O=HAProxy Technologies/CN=Root CA Last Update: Apr 23 14:30:44 2021 GMT Next Update: Sep 8 14:30:44 2048 GMT No Revoked Certificates. ``` **show ssl crt-list** [-n] [<filename>] ``` Display the list of crt-list and directories used in the HAProxy configuration. If a filename is specified, dump the content of a crt-list or a directory. Once dumped the output can be used as a crt-list file. The '-n' option can be used to display the line number, which is useful when combined with the 'del ssl crt-list' option when a entry is duplicated. The output with the '-n' option is not compatible with the crt-list format and not loadable by haproxy. ``` Example: ``` echo "show ssl crt-list -n localhost.crt-list" | socat /tmp/sock1 - # localhost.crt-list common.pem:1 !not.test1.com *.test1.com !localhost common.pem:2 ecdsa.pem:3 [verify none allow-0rtt ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3] localhost !www.test1.com ecdsa.pem:4 [verify none allow-0rtt ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3] ``` **show ssl ocsp-response** [<id>] ``` Display the IDs of the OCSP tree entries corresponding to all the OCSP responses used in HAProxy, as well as the issuer's name and key hash and the serial number of the certificate for which the OCSP response was built. If a valid <id> is provided, display the contents of the corresponding OCSP response. The information displayed is the same as in an "openssl ocsp -respin <ocsp-response> -text" call. 
``` Example : ``` $ echo "[show ssl ocsp-response](#show%20ssl%20ocsp-response)" | socat /var/run/haproxy.master - # Certificate IDs Certificate ID key : 303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a0202100a Certificate ID: Issuer Name Hash: 8A83E0060FAFF709CA7E9B95522A2E81635FDA0A Issuer Key Hash: F652B0E435D5EA923851508F0ADBE92D85DE007A Serial Number: 100A $ echo "show ssl ocsp-response 303b300906052b0e03021a050004148a83e0060faff709ca7e9b95522a2e81635fda0a0414f652b0e435d5ea923851508f0adbe92d85de007a0202100a" | socat /var/run/haproxy.master - OCSP Response Data: OCSP Response Status: successful (0x0) Response Type: Basic OCSP Response Version: 1 (0x0) Responder Id: C = FR, O = HAProxy Technologies, CN = ocsp.haproxy.com Produced At: May 27 15:43:38 2021 GMT Responses: Certificate ID: Hash Algorithm: sha1 Issuer Name Hash: 8A83E0060FAFF709CA7E9B95522A2E81635FDA0A Issuer Key Hash: F652B0E435D5EA923851508F0ADBE92D85DE007A Serial Number: 100A Cert Status: good This Update: May 27 15:43:38 2021 GMT Next Update: Oct 12 15:43:38 2048 GMT [...] ``` **show ssl providers** ``` Display the names of the providers loaded by OpenSSL during init. Provider loading can indeed be configured via the OpenSSL configuration file and this option allows to check that the right providers were loaded. This command is only available with OpenSSL v3. ``` Example : ``` $ echo "[show ssl providers](#show%20ssl%20providers)" | socat /var/run/haproxy.master - Loaded providers : - fips - base ``` **show startup-logs** ``` Dump all messages emitted during the startup of the current haproxy process, each startup-logs buffer is unique to its haproxy worker. This keyword also exists on the master CLI, which shows the latest startup or reload tentative. ``` **show table** ``` Dump general information on all known stick-tables. Their name is returned (the name of the proxy which holds them), their type (currently zero, always IP), their size in maximum possible number of entries, and the number of entries currently in use. ``` Example : ``` $ echo "[show table](#show%20table)" | socat stdio /tmp/sock1 >>> # table: front\_pub, type: ip, size:204800, used:171454 >>> # table: back\_rdp, type: ip, size:204800, used:0 ``` **show table** <name> [ data.<type> <operator> <value> [data.<type> ...]] | [ key <key> ] ``` Dump contents of stick-table <name>. In this mode, a first line of generic information about the table is reported as with "[show table](#show%20table)", then all entries are dumped. Since this can be quite heavy, it is possible to specify a filter in order to specify what entries to display. When the "data." form is used the filter applies to the stored data (see "stick-table" in [section 4.2](#4.2)). A stored data type must be specified in <type>, and this data type must be stored in the table otherwise an error is reported. The data is compared according to <operator> with the 64-bit integer <value>. Operators are the same as with the ACLs : - eq : match entries whose data is equal to this value - ne : match entries whose data is not equal to this value - le : match entries whose data is less than or equal to this value - ge : match entries whose data is greater than or equal to this value - lt : match entries whose data is less than this value - gt : match entries whose data is greater than this value In this form, you can use multiple data filter entries, up to a maximum defined during build time (4 by default). 
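For instance, two filters could be combined to match entries having a non-null gpc0 and a connection rate above 5 (a sketch reusing the table and socket names from the examples below):

  $ echo "show table http_proxy data.gpc0 gt 0 data.conn_rate gt 5" | \
      socat stdio /tmp/sock1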
When the key form is used the entry <key> is shown. The key must be of the same type as the table, which currently is limited to IPv4, IPv6, integer, and string. ``` Example : ``` $ echo "show table http_proxy" | socat stdio /tmp/sock1 >>> # table: http\_proxy, type: ip, size:204800, used:2 >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1 \ bytes_out_rate(60000)=187 >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \ bytes_out_rate(60000)=191 $ echo "show table http_proxy data.gpc0 gt 0" | socat stdio /tmp/sock1 >>> # table: http\_proxy, type: ip, size:204800, used:2 >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \ bytes_out_rate(60000)=191 $ echo "show table http_proxy data.conn_rate gt 5" | \ socat stdio /tmp/sock1 >>> # table: http\_proxy, type: ip, size:204800, used:2 >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \ bytes_out_rate(60000)=191 $ echo "show table http_proxy key 127.0.0.2" | \ socat stdio /tmp/sock1 >>> # table: http\_proxy, type: ip, size:204800, used:2 >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \ bytes_out_rate(60000)=191 ``` ``` When the data criterion applies to a dynamic value dependent on time such as a bytes rate, the value is dynamically computed during the evaluation of the entry in order to decide whether it has to be dumped or not. This means that such a filter could match for some time then not match anymore because as time goes, the average event rate drops. It is possible to use this to extract lists of IP addresses abusing the service, in order to monitor them or even blacklist them in a firewall. ``` Example : ``` $ echo "show table http_proxy data.gpc0 gt 0" \ | socat stdio /tmp/sock1 \ | fgrep 'key=' | cut -d' ' -f2 | cut -d= -f2 > abusers-ip.txt ( or | awk '/key/{ print a[split($2,a,"=")]; }' ) ``` ``` When the stick-table is synchronized to a peers section supporting sharding, the shard number will be displayed for each key (otherwise '0' is reported). This allows to know which peers will receive this key. ``` Example: ``` $ echo "show table http_proxy" | socat stdio /tmp/sock1 | fgrep shard= 0x7f23b0c822a8: key=10.0.0.2 use=0 exp=296398 shard=9 gpc0=0 0x7f23a063f948: key=10.0.0.6 use=0 exp=296075 shard=12 gpc0=0 0x7f23b03920b8: key=10.0.0.8 use=0 exp=296766 shard=1 gpc0=0 0x7f23a43c09e8: key=10.0.0.12 use=0 exp=295368 shard=8 gpc0=0 ``` **show tasks** ``` Dumps the number of tasks currently in the run queue, with the number of occurrences for each function, and their average latency when it's known (for pure tasks with task profiling enabled). The dump is a snapshot of the instant it's done, and there may be variations depending on what tasks are left in the queue at the moment it happens, especially in mono-thread mode as there's less chance that I/Os can refill the queue (unless the queue is full). This command takes exclusive access to the process and can cause minor but measurable latencies when issued on a highly loaded process, so it must not be abused by monitoring bots. ``` **show threads** ``` Dumps some internal states and structures for each thread, that may be useful to help developers understand a problem. The output tries to be readable by showing one block per thread. When haproxy is built with USE_THREAD_DUMP=1, an advanced dump mechanism involving thread signals is used so that each thread can dump its own state in turn. 
Without this option, the thread processing the command shows all its details but the other ones are less detailed. A star ('*') is displayed in front of the thread handling the command. A right angle bracket ('>') may also be displayed in front of threads which didn't make any progress since last invocation of this command, indicating a bug in the code which must absolutely be reported. When this happens between two threads it usually indicates a deadlock. If a thread is alone, it's a different bug like a corrupted list. In all cases the process is not fully functional anymore and needs to be restarted. The output format is purposely not documented so that it can easily evolve as new needs are identified, without having to maintain any form of backwards compatibility, and just like with "[show activity](#show%20activity)", the values are meaningless without the code at hand. ``` **show tls-keys** [id|\*] ``` Dump all loaded TLS ticket key references. The TLS ticket key reference ID and the file from which the keys have been loaded are shown. Both of those can be used to update the TLS keys using "[set ssl tls-key](#set%20ssl%20tls-key)". If an ID is specified as parameter, it will dump the tickets; using '*' will dump every key from every reference. ``` **show schema json** ``` Dump the schema used for the output of "show info json" and "show stat json". The schema contains no extra whitespace in order to reduce the volume of output. For human consumption passing the output through a pretty printer may be helpful. Example : $ echo "[show schema json](#show%20schema%20json)" | socat /var/run/haproxy.sock stdio | \ python -m json.tool The schema follows "JSON Schema" (json-schema.org) and accordingly verifiers may be used to verify the output of "show info json" and "show stat json" against the schema. ``` **show trace** [<source>] ``` Show the current trace status. For each source a line is displayed with a single-character status indicating if the trace is stopped, waiting, or running. The output sink used by the trace is indicated (or "none" if none was set), as well as the number of dropped events in this sink, followed by a brief description of the source. If a source name is specified, a detailed list of all events supported by the source is displayed, along with their status for each action (report, start, pause, stop), indicated by a "+" if they are enabled, or a "-" otherwise. All these events are independent and an event might trigger a start without being reported and conversely. ``` **show version** ``` Show the version of the current HAProxy process. This is available from both the master and worker CLIs. ``` Example: ``` $ echo "[show version](#show%20version)" | socat /var/run/haproxy.sock stdio 2.4.9 $ echo "[show version](#show%20version)" | socat /var/run/haproxy-master.sock stdio 2.5.0 ``` **shutdown frontend** <frontend> ``` Completely delete the specified frontend. All the ports it was bound to will be released. It will not be possible to enable the frontend anymore after this operation. This is intended to be used in environments where stopping a proxy is not even imaginable but a misconfigured proxy must be fixed. That way it's possible to release the port and bind it into another process to restore operations. The frontend will not appear at all on the stats page once it is terminated. The frontend may be specified either by its name or by its numeric ID, prefixed with a sharp ('#'). This command is restricted and can only be issued on sockets configured for level "admin".
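For example (a sketch; the frontend name and socket path are illustrative):

  $ echo "shutdown frontend fe-old-http" | socat stdio /var/run/haproxy.sock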
``` **shutdown session** <id> ``` Immediately terminate the session matching the specified session identifier. This identifier is the first field at the beginning of the lines in the dumps of "[show sess](#show%20sess)" (it corresponds to the session pointer). This can be used to terminate a long-running session without waiting for a timeout or when an endless transfer is ongoing. Such terminated sessions are reported with a 'K' flag in the logs. ``` **shutdown sessions server** <backend>/<server> ``` Immediately terminate all the sessions attached to the specified server. This can be used to terminate long-running sessions after a server is put into maintenance mode, for instance. Such terminated sessions are reported with a 'K' flag in the logs. ``` **trace** ``` The "[trace](#trace)" command alone lists the trace sources, their current status, and their brief descriptions. It is only meant as a menu to enter next levels, see other "[trace](#trace)" commands below. ``` **trace 0** ``` Immediately stops all traces. This is made to be used as a quick solution to terminate a debugging session or as an emergency action to be used in case complex traces were enabled on multiple sources and impact the service. ``` **trace** <source> event [ [+|-|!]<name> ] ``` Without argument, this will list all the events supported by the designated source. They are prefixed with a "-" if they are not enabled, or a "+" if they are enabled. It is important to note that a single trace may be labelled with multiple events, and as long as any of the enabled events matches one of the events labelled on the trace, the event will be passed to the trace subsystem. For example, receiving an HTTP/2 frame of type HEADERS may trigger a frame event and a stream event since the frame creates a new stream. If either the frame event or the stream event are enabled for this source, the frame will be passed to the trace framework. With an argument, it is possible to toggle the state of each event and individually enable or disable them. Two special keywords are supported, "none", which matches no event, and is used to disable all events at once, and "any" which matches all events, and is used to enable all events at once. Other events are specific to the event source. It is possible to enable one event by specifying its name, optionally prefixed with '+' for better readability. It is possible to disable one event by specifying its name prefixed by a '-' or a '!'. One way to completely disable a trace source is to pass "event none", and this source will instantly be totally ignored. ``` **trace** <source> level [<level>] ``` Without argument, this will list all trace levels for this source, and the current one will be indicated by a star ('*') prepended in front of it. With an argument, this will change the trace level to the specified level. Detail levels are a form of filters that are applied before reporting the events. These filters are used to selectively include or exclude events depending on their level of importance. For example a developer might need to know precisely where in the code an HTTP header was considered invalid while the end user may not even care about this header's validity at all. There are currently 5 distinct levels for a trace : user this will report information that are suitable for use by a regular haproxy user who wants to observe his traffic. Typically some HTTP requests and responses will be reported without much detail. Most sources will set this as the default level to ease operations. 
proto in addition to what is reported at the "[user](#user)" level, it also displays protocol-level updates. This can for example be the frame types or HTTP headers after decoding. state in addition to what is reported at the "proto" level, it will also display state transitions (or failed transitions) which happen in parsers, so this will show attempts to perform an operation while the "proto" level only shows the final operation. data in addition to what is reported at the "state" level, it will also include data transfers between the various layers. developer it reports everything available, which can include advanced information such as "breaking out of this loop" that are only relevant to a developer trying to understand a bug that only happens once in a while in field. Function names are only reported at this level. It is highly recommended to always use the "[user](#user)" level only and switch to other levels only if instructed to do so by a developer. Also it is a good idea to first configure the events before switching to higher levels, as it may save from dumping many lines if no filter is applied. ``` **trace** <source> lock [criterion] ``` Without argument, this will list all the criteria supported by this source for lock-on processing, and display the current choice by a star ('*') in front of it. Lock-on means that the source will focus on the first matching event and only stick to the criterion which triggered this event, and ignore all other ones until the trace stops. This allows for example to take a trace on a single connection or on a single stream. The following criteria are supported by some traces, though not necessarily all, since some of them might not be available to the source : backend lock on the backend that started the trace connection lock on the connection that started the trace frontend lock on the frontend that started the trace listener lock on the listener that started the trace nothing do not lock on anything server lock on the server that started the trace session lock on the session that started the trace thread lock on the thread that started the trace In addition to this, each source may provide up to 4 specific criteria such as internal states or connection IDs. For example in HTTP/2 it is possible to lock on the H2 stream and ignore other streams once a strace starts. When a criterion is passed in argument, this one is used instead of the other ones and any existing tracking is immediately terminated so that it can restart with the new criterion. The special keyword "nothing" is supported by all sources to permanently disable tracking. ``` **trace** <source> { pause | start | stop } [ [+|-|!]event] ``` Without argument, this will list the events enabled to automatically pause, start, or stop a trace for this source. These events are specific to each trace source. With an argument, this will either enable the event for the specified action (if optionally prefixed by a '+') or disable it (if prefixed by a '-' or '!'). The special keyword "now" is not an event and requests to take the action immediately. The keywords "none" and "any" are supported just like in "trace event". The 3 supported actions are respectively "pause", "start" and "stop". The "pause" action enumerates events which will cause a running trace to stop and wait for a new start event to restart it. The "start" action enumerates the events which switch the trace into the waiting mode until one of the start events appears. 
And the "stop" action enumerates the events which definitively stop the trace until it is manually enabled again. In practice it makes sense to manually start a trace using "start now" without caring about events, and to stop it using "stop now". In order to capture more subtle event sequences, setting "start" to a normal event (like receiving an HTTP request) and "stop" to a very rare event like emitting a certain error, will ensure that the last captured events will match the desired criteria. And the pause event is useful to detect the end of a sequence, disable the lock-on and wait for another opportunity to take a capture. In this case it can make sense to enable lock-on to spot only one specific criterion (e.g. a stream), and have "start" set to anything that starts this criterion (e.g. all events which create a stream), "stop" set to the expected anomaly, and "pause" to anything that ends that criterion (e.g. any end of stream event). In this case the trace log will contain complete sequences of perfectly clean series affecting a single object, until the last sequence containing everything from the beginning to the anomaly.
```
**trace** <source> sink [<sink>]
```
Without argument, this will list all event sinks available for this source, and the currently configured one will have a star ('*') prepended in front of it. Sink "none" is always available and means that all events are simply dropped, though their processing is not ignored (e.g. lock-on does occur). Other sinks are available depending on configuration and build options, but typically "stdout" and "stderr" will be usable in debug mode, and in-memory ring buffers should be available as well. When a name is specified, the sink instantly changes for the specified source. Events are not changed during a sink change. In the worst case some may be lost if an invalid sink is used (or "none"), but operations do continue to a different destination.
```
**trace** <source> verbosity [<level>]
```
Without argument, this will list all verbosity levels for this source, and the current one will be indicated by a star ('*') prepended in front of it. With an argument, this will change the verbosity level to the specified one. Verbosity levels indicate how far the trace decoder should go to provide detailed information. It depends on the trace source, since some sources will not even provide a specific decoder. Level "quiet" is always available and disables any decoding. It can be useful when trying to figure out what's happening before trying to understand the details, since it will have a very low impact on performance and trace size. When no verbosity levels are declared by a source, level "default" is available and will cause a decoder to be called when specified in the traces. It is an opportunistic decoding. When the source declares some verbosity levels, these are listed with a description of what they correspond to. In this case the trace decoder provided by the source will be as accurate as possible based on the information available at the trace point. The first level above "quiet" is set by default.
```
### 9.4. Master CLI
```
The master CLI is a socket bound to the master process in master-worker mode. This CLI gives access to the unix socket commands in every running or leaving process and allows basic supervision of those processes. The master CLI is configurable only from the haproxy program arguments with the -S option. This option also takes bind options separated by commas.
```
Example:
```
# haproxy -W -S 127.0.0.1:1234 -f test1.cfg
# haproxy -Ws -S /tmp/master-socket,uid,1000,gid,1000,mode,600 -f test1.cfg
# haproxy -W -S /tmp/master-socket,level,user -f test1.cfg
```
#### 9.4.1. Master CLI commands
```
@<[!]pid>

The master CLI uses a special prefix notation to access the multiple processes. This notation is easily identifiable as it begins with a @. A @ prefix can be followed by a relative process number or by an exclamation point and a PID (e.g. @1 or @!1271). A @ alone can be used to specify the master. Leaving processes are only accessible by their PID, as relative process numbers are only usable with the current processes.
```
Examples:
```
$ socat /var/run/haproxy-master.sock readline
prompt
master> @1 show info; @2 show info
[...]
Process_num: 1
Pid: 1271
[...]
Process_num: 2
Pid: 1272
[...]
master>

$ echo '@!1271 show info; @!1272 show info' | socat /var/run/haproxy-master.sock -
[...]
```
```
A prefix can also be used as a command, which will send every subsequent command to the specified process.
```
Examples:
```
$ socat /var/run/haproxy-master.sock readline
prompt
master> @1
1271> show info
[...]
1271> show stat
[...]
1271> @
master>

$ echo '@1; show info; show stat; @2; show info; show stat' | socat /var/run/haproxy-master.sock -
[...]
```
**expert-mode** [on|off]
```
This command activates the "expert-mode" for every worker accessed from the master CLI. Combined with "[mcli-debug-mode](#mcli-debug-mode)" it also activates the command on the master. Displays the flag "e" in the master CLI prompt. See also "expert-mode" in Section 9.3 and "[mcli-debug-mode](#mcli-debug-mode)" in 9.4.1.
```
**experimental-mode** [on|off]
```
This command activates the "experimental-mode" for every worker accessed from the master CLI. Combined with "[mcli-debug-mode](#mcli-debug-mode)" it also activates the command on the master. Displays the flag "x" in the master CLI prompt. See also "experimental-mode" in Section 9.3 and "[mcli-debug-mode](#mcli-debug-mode)" in 9.4.1.
```
**mcli-debug-mode** [on|off]
```
This keyword enables a special mode in the master CLI which makes every keyword that was meant for a worker CLI available on the master CLI, allowing you to debug the master process. Once activated, you can list the newly available keywords with "[help](#help)". Combined with "experimental-mode" or "expert-mode" it enables even more keywords. Displays the flag "d" in the master CLI prompt.
```
**prompt**
```
When the prompt is enabled (via the "prompt" command), the context the CLI is working on is displayed in the prompt. The master is identified by the "master" string, and other processes are identified with their PID. In case the last reload failed, the master prompt will be changed to "master[ReloadFailed]>" so that it becomes visible that the process is still running on the previous configuration and that the new configuration is not operational. The prompt of the master CLI is able to display several flags which indicate the enabled modes: "d" for mcli-debug-mode, "e" for expert-mode, "x" for experimental-mode.
```
Example:
```
$ socat /var/run/haproxy-master.sock -
prompt
master> expert-mode on
master(e)> experimental-mode on
master(xe)> mcli-debug-mode on
master(xed)> @1
95191(xed)>
```
**reload**
```
You can also reload the HAProxy master process with the "[reload](#reload)" command, which does the same as a `kill -USR2` on the master process, provided that the user has at least "[operator](#operator)" or "admin" privileges.
This command performs a synchronous reload: it returns a reload status once the reload has been performed. Be careful with the timeout if a tool is used to parse the output: the status is only returned once the configuration is parsed and the new worker is forked. The "socat" command uses a timeout of 0.5s by default, so it will quit before showing the message if the reload takes too long. "ncat" does not have a timeout by default. When compiled with USE_SHM_OPEN=1, the reload command is also able to dump the startup-logs of the master.
```
Example:
```
$ echo "[reload](#reload)" | socat -t300 /var/run/haproxy-master.sock stdin
Success=1
--
[NOTICE] (482713) : haproxy version is 2.7-dev7-4827fb-69
[NOTICE] (482713) : path to executable is ./haproxy
[WARNING] (482713) : config : 'http-request' rules ignored for proxy 'frt1' as they require HTTP mode.
[NOTICE] (482713) : New worker (482720) forked
[NOTICE] (482713) : Loading success.

$ echo "[reload](#reload)" | socat -t300 /var/run/haproxy-master.sock stdin
Success=0
--
[NOTICE] (482886) : haproxy version is 2.7-dev7-4827fb-69
[NOTICE] (482886) : path to executable is ./haproxy
[ALERT] (482886) : config : parsing [test3.cfg:1]: unknown keyword 'Aglobal' out of section.
[ALERT] (482886) : config : Fatal errors found in configuration.
[WARNING] (482886) : Loading failure!

$
```
```
The reload command is the last one executed on the master CLI; every command after it is ignored. Once the reload command returns its status, it will close the connection to the CLI. Note that a reload will close all connections to the master CLI.
```
**show proc**
```
The master CLI introduces a 'show proc' command to supervise the processes.
```
Example:
```
$ echo 'show proc' | socat /var/run/haproxy-master.sock -
#<PID>    <type>    <reloads>      <uptime>       <version>
1162      master    5 [failed: 0]  0d00h02m07s    2.5-dev13
# workers
1271      worker    1              0d00h00m00s    2.5-dev13
# old workers
1233      worker    3              0d00h00m43s    2.0-dev3-6019f6-289
# programs
1244      foo       0              0d00h00m00s    -
1255      bar       0              0d00h00m00s    -
```
```
In this example, the master has been reloaded 5 times but one of the old workers is still running and has survived 3 reloads. You could access the CLI of this worker to understand what's going on.
```
**show startup-logs**
```
HAProxy needs to be compiled with USE_SHM_OPEN=1 for this command to work correctly on the master CLI, otherwise not all messages will be visible. Like its counterpart on the stats socket, this command is able to show the startup messages of HAProxy. However it does not dump the startup messages of the current worker, but the startup messages of the latest startup or reload, which means it is able to dump the parsing messages of a failed reload. Those messages are also dumped with the "[reload](#reload)" command.
```
10. Tricks for easier configuration management
-----------------------------------------------
```
It is very common that two HAProxy nodes constituting a cluster share exactly the same configuration modulo a few addresses. Instead of having to maintain a duplicate configuration for each node, which will inevitably diverge, it is possible to include environment variables in the configuration. Thus multiple configurations may share the exact same file with only a few different system-wide environment variables. This started in version 1.5 where only addresses were allowed to include environment variables, and 1.6 goes further by supporting environment variables everywhere.
The syntax is the same as in the UNIX shell: a variable starts with a dollar sign ('$'), followed by an opening curly brace ('{'), then the variable name, followed by the closing brace ('}'). Except for addresses, environment variables are only interpreted in arguments surrounded with double quotes (this was necessary not to break existing setups using regular expressions involving the dollar symbol). Environment variables also make it convenient to write configurations which are expected to work on various sites where only the address changes. It also makes it possible to remove passwords from some configs. Example below where the file "site1.env" is sourced by the init script upon startup:

$ cat site1.env
LISTEN=192.168.1.1
CACHE_PFX=192.168.11
SERVER_PFX=192.168.22
LOGGER=192.168.33.1
STATSLP=admin:pa$$w0rd
ABUSERS=/etc/haproxy/abuse.lst
TIMEOUT=10s

$ cat haproxy.cfg
global
    log "${LOGGER}:514" local0

defaults
    mode http
    timeout client "${TIMEOUT}"
    timeout server "${TIMEOUT}"
    timeout connect 5s

frontend public
    bind "${LISTEN}:80"
    http-request reject if { src -f "${ABUSERS}" }
    stats uri /stats
    stats auth "${STATSLP}"
    use_backend cache if { path_end .jpg .css .ico }
    default_backend server

backend cache
    server cache1 "${CACHE_PFX}.1:18080" check
    server cache2 "${CACHE_PFX}.2:18080" check

backend server
    server cache1 "${SERVER_PFX}.1:8080" check
    server cache2 "${SERVER_PFX}.2:8080" check
```
11. Well-known traps to avoid
------------------------------
```
Once in a while, someone reports that after a system reboot, the haproxy service wasn't started, and that once they start it by hand it works. Most often, these people are running a clustered IP address mechanism such as keepalived, to assign the service IP address to the master node only, and while it used to work when haproxy was bound to address 0.0.0.0, it stopped working after they bound it to the virtual IP address. What happens here is that when the service starts, the virtual IP address is not yet owned by the local node, so when HAProxy wants to bind to it, the system rejects this because it is not a local IP address. The fix doesn't consist in delaying the haproxy service startup (since it wouldn't stand a restart), but instead in properly configuring the system to allow binding to non-local addresses. This is easily done on Linux by setting the net.ipv4.ip_nonlocal_bind sysctl to 1. This is also needed in order to transparently intercept the IP traffic that passes through HAProxy for a specific target address.

Multi-process configurations involving source port ranges may seem to work but they will cause some random failures under high loads because more than one process may try to use the same source port to connect to the same server, which is not possible. The system will report an error and a retry will happen, picking another port. A high value in the "retries" parameter may hide the effect to a certain extent but this also comes with increased CPU usage and processing time. Logs will also report a certain number of retries. For this reason, port ranges should be avoided in multi-process configurations.

Since HAProxy uses SO_REUSEPORT and supports having multiple independent processes bound to the same IP:port, during troubleshooting it can happen that an old process was not stopped before a new one was started. This provides absurd test results which tend to indicate that any change to the configuration is ignored.
The reason is that, even though the new process is restarted with the new configuration, the old one also gets some incoming connections and processes them, returning unexpected results. When in doubt, just stop the new process and try again. If it still works, it very likely means that an old process remains alive and has to be stopped. Linux's "netstat -lntp" is of good help here.

When adding entries to an ACL from the command line (eg: when blacklisting a source address), it is important to keep in mind that these entries are not synchronized to the file and that if someone reloads the configuration, these updates will be lost. While this is often the desired effect (for blacklisting) it may not necessarily match expectations when the change was made as a fix for a problem. See the "[add acl](#add%20acl)" action of the CLI interface.
```
12. Debugging and performance issues
-------------------------------------
```
When HAProxy is started with the "-d" option, it will stay in the foreground and will print one line per event, such as an incoming connection, the end of a connection, or each request or response header line seen. This debug output is emitted before the contents are processed, so it does not reflect local modifications. The main use is to show the request and response without having to run a network sniffer. The output is less readable when multiple connections are handled in parallel, though the "debug2ansi" and "debug2html" scripts found in the examples/ directory definitely help here by coloring the output.

If a request or response is rejected because HAProxy finds it is malformed, the best thing to do is to connect to the CLI and issue "[show errors](#show%20errors)", which will report the last captured faulty request and response for each frontend and backend, with all the necessary information to indicate precisely the first character of the input stream that was rejected. This is sometimes needed to prove to customers or to developers that a bug is present in their code. In this case it is often possible to relax the checks (but still keep the captures) using "option accept-invalid-http-request" or its equivalent for responses coming from the server, "option accept-invalid-http-response". Please see the configuration manual for more details.
```
Example:
```
> show errors
Total events captured on [13/Oct/2015:13:43:47.169] : 1

[13/Oct/2015:13:43:40.918] frontend HAProxyLocalStats (#2): invalid request
  backend <NONE> (#-1), server <NONE> (#-1), event #0
  src 127.0.0.1:51981, session #0, session flags 0x00000080
  HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
  HTTP chunk len 0 bytes, HTTP body len 0 bytes
  buffer flags 0x00808002, out 0 bytes, total 31 bytes
  pending 31 bytes, wrapping at 8040, error at position 13:

  00000  GET /invalid request HTTP/1.1\r\n
```
```
The output of "[show info](#show%20info)" on the CLI provides a lot of useful information regarding the maximum connection rate ever reached, the maximum SSL key rate ever reached, and in general all information which can help to explain temporary issues regarding CPU or memory usage.
Example:

> show info
Name: HAProxy
Version: 1.6-dev7-e32d18-17
Release_date: 2015/10/12
Nbproc: 1
Process_num: 1
Pid: 7949
Uptime: 0d 0h02m39s
Uptime_sec: 159
Memmax_MB: 0
Ulimit-n: 120032
Maxsock: 120032
Maxconn: 60000
Hard_maxconn: 60000
CurrConns: 0
CumConns: 3
CumReq: 3
MaxSslConns: 0
CurrSslConns: 0
CumSslConns: 0
Maxpipes: 0
PipesUsed: 0
PipesFree: 0
ConnRate: 0
ConnRateLimit: 0
MaxConnRate: 1
SessRate: 0
SessRateLimit: 0
MaxSessRate: 1
SslRate: 0
SslRateLimit: 0
MaxSslRate: 0
SslFrontendKeyRate: 0
SslFrontendMaxKeyRate: 0
SslFrontendSessionReuse_pct: 0
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 0
SslCacheLookups: 0
SslCacheMisses: 0
CompressBpsIn: 0
CompressBpsOut: 0
CompressBpsRateLim: 0
ZlibMemUsage: 0
MaxZlibMemUsage: 0
Tasks: 5
Run_queue: 1
Idle_pct: 100
node: wtap
description:

When an issue seems to randomly appear on a new version of HAProxy (eg: every second request is aborted, occasional crash, etc), it is worth trying to enable memory poisoning so that each call to malloc() is immediately followed by the filling of the memory area with a configurable byte. By default this byte is 0x50 (ASCII for 'P'), but any other byte can be used, including zero (which will have the same effect as a calloc() and which may make issues disappear). Memory poisoning is enabled on the command line using the "-dM" option. It slightly hurts performance and is not recommended for use in production. If an issue happens all the time with it or never happens when poisoning uses byte zero, it clearly means you've found a bug and you definitely need to report it. Otherwise, if there's no clear change, the problem is not related.

When debugging some latency issues, it is important to use both strace and tcpdump on the local machine, and another tcpdump on the remote system. The reason for this is that there are delays everywhere in the processing chain and it is important to know which one is causing the latency in order to know where to act. In practice, the local tcpdump will indicate when the input data come in. Strace will indicate when haproxy receives these data (using recv/recvfrom). Warning: openssl uses read()/write() syscalls instead of recv()/send(). Strace will also show when haproxy sends the data, and tcpdump will show when the system sends these data to the interface. Then the external tcpdump will show when the data sent are really received (since the local one only shows when the packets are queued). The benefit of sniffing on the local system is that strace and tcpdump will use the same reference clock. Strace should be used with "-tts200" to get complete timestamps and report large enough chunks of data to read them. Tcpdump should be used with "-nvvttSs0" to report full packets, real sequence numbers and complete timestamps.

In practice, received data are almost always immediately received by haproxy (unless the machine has a saturated CPU or these data are invalid and not delivered). If these data are received but not sent, it generally is because the output buffer is saturated (ie: the recipient doesn't consume the data fast enough). This can be confirmed by seeing that the polling doesn't notify of the ability to write on the output file descriptor for some time (it's often easier to spot in the strace output when the data finally leave and then roll back to see when the write event was notified). It generally matches an ACK received from the recipient and detected by tcpdump. Once the data are sent, they may spend some time in the system doing nothing.
Here again, the TCP congestion window may be limited and not allow these data to leave, waiting for an ACK to open the window. If the traffic is idle and the data take 40 ms or 200 ms to leave, it's a different cause (and not really a problem): it's the fact that the Nagle algorithm prevents incomplete packets from leaving immediately, in the hope that they will be merged with subsequent data. HAProxy automatically disables Nagle in pure TCP mode and in tunnels. However it definitely remains enabled when forwarding an HTTP body (and this contributes to the performance improvement there by reducing the number of packets). Some non-compliant HTTP applications may be sensitive to the latency when delivering incomplete HTTP response messages. In this case you will have to enable "option http-no-delay" to disable Nagle in order to work around their design, keeping in mind that any other proxy in the chain may similarly be impacted.

If tcpdump reports that data leave immediately but the other end doesn't see them quickly, it can mean there is a congested WAN link, a congested LAN with flow control enabled and preventing the data from leaving, or more commonly that HAProxy is in fact running in a virtual machine and that for whatever reason the hypervisor has decided that the data didn't need to be sent immediately. In virtualized environments, latency issues are almost always caused by the virtualization layer, so in order to save time, it's worth first comparing tcpdump captures taken in the VM and on the external components. Any difference has to be credited to the hypervisor and its accompanying drivers.

When some TCP SACK segments are seen in tcpdump traces (using -vv), it always means that the side sending them has proof of a lost packet. While not seeing them doesn't mean there are no losses, seeing them definitely means the network is lossy. Losses are normal on a network, but at a rate where SACKs are not noticeable to the naked eye. If they appear a lot in the traces, it is worth investigating exactly what happens and where the packets are lost. HTTP doesn't cope well with TCP losses, which introduce huge latencies.

The "netstat -i" command will report statistics per interface. An interface where the Rx-Ovr counter grows indicates that the system doesn't have enough resources to receive all incoming packets and that they're lost before being processed by the network driver. Rx-Drp indicates that some received packets were lost in the network stack because the application doesn't process them fast enough. This can happen during some attacks as well. Tx-Drp means that the output queues were full and packets had to be dropped. When using TCP it should be very rare, but will possibly indicate a saturated outgoing link.
```
13. Security considerations
----------------------------
```
HAProxy is designed to run with very limited privileges. The standard way to use it is to isolate it into a chroot jail and to drop its privileges to a non-root user without any permissions inside this jail, so that if any future vulnerability were to be discovered, its compromise would not affect the rest of the system. In order to perform a chroot, it first needs to be started as a root user. It is pointless to build hand-made chroots to start the process there: they are painful to build, are never properly maintained and always contain way more bugs than the main file-system. And in case of compromise, the intruder can make use of this purposely built file-system.
Unfortunately many administrators confuse "start as root" and "run as root", resulting in the uid change being done prior to starting haproxy, which reduces the effective security restrictions.

HAProxy will need to be started as root in order to:
  - adjust the file descriptor limits
  - bind to privileged port numbers
  - bind to a specific network interface
  - transparently listen to a foreign address
  - isolate itself inside the chroot jail
  - drop to another non-privileged UID

HAProxy may need to be run as root in order to:
  - bind to an interface for outgoing connections
  - bind to privileged source ports for outgoing connections
  - transparently bind to a foreign address for outgoing connections

Most users will never need the "run as root" case; the "start as root" case covers most usages.

A safe configuration will have:

  - a chroot statement pointing to an empty location without any access permissions. This can be prepared this way on the UNIX command line:

      # mkdir /var/empty && chmod 0 /var/empty || echo "Failed"

    and referenced like this in the HAProxy configuration's global section:

      chroot /var/empty

  - both uid/user and gid/group statements in the global section:

      user haproxy
      group haproxy

  - a stats socket whose mode, uid and gid are set to match the user and/or group allowed to access the CLI so that nobody else may access it:

      stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600
```
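Putting these recommendations together, a minimal sketch of such a hardened global section could look like this (the socket path and the hatop uid/gid are simply the illustrative values used above and should be adapted to your own environment):
```
global
    chroot /var/empty
    user haproxy
    group haproxy
    stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600
```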
angular Angular Documentation Angular Documentation ===================== Angular is an application-design framework and development platform for creating efficient and sophisticated single-page apps. These Angular docs help you learn and use the Angular framework and development platform, from your first application to optimizing complex single-page applications for enterprises. Tutorials and guides include downloadable examples to help you start your projects. Assumptions ----------- These docs assume that you are already familiar with [HTML](https://developer.mozilla.org/docs/Learn/HTML/Introduction_to_HTML "Learn HTML"), [CSS](https://developer.mozilla.org/docs/Learn/CSS/First_steps "Learn CSS"), [JavaScript](https://developer.mozilla.org/docs/Web/JavaScript/A_re-introduction_to_JavaScript "Learn JavaScript"), and some of the tools from the [latest standards](https://developer.mozilla.org/docs/Web/JavaScript/Language_Resources "Latest JavaScript standards"), such as [classes](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Classes "ES2015 Classes") and [modules](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/import "ES2015 Modules"). The code samples are written using [TypeScript](https://www.typescriptlang.org/ "TypeScript"). Most Angular code can be written with just the latest JavaScript, using [types](https://www.typescriptlang.org/docs/handbook/classes.html "TypeScript Types") for dependency injection, and using [decorators](https://www.typescriptlang.org/docs/handbook/decorators.html "Decorators") for metadata. Feedback -------- We want to hear from you. [Report problems or submit suggestions for future docs](https://github.com/angular/angular/issues/new/choose "Angular GitHub repository new issue form"). Contribute to Angular docs by creating [pull requests](https://github.com/angular/angular/pulls "Angular Github pull requests") on the Angular GitHub repository. See [Contributing to Angular](https://github.com/angular/angular/blob/main/CONTRIBUTING.md "Contributing guide") for information about submission guidelines. Our community values respectful, supportive communication. Please consult and adhere to the [Code of Conduct](https://github.com/angular/code-of-conduct/blob/main/CODE_OF_CONDUCT.md "Contributor code of conduct"). angular Extended Diagnostics Extended Diagnostics ==================== There are many coding patterns that are technically valid to the compiler or runtime, but which may have complex nuances or caveats. These patterns may not have the intended effect expected by a developer, which often leads to bugs. The Angular compiler includes "extended diagnostics" which identify many of these patterns, in order to warn developers about the potential issues and enforce common best practices within a codebase. Diagnostics ----------- Currently, Angular supports the following extended diagnostics: * [NG8101 - `invalidBananaInBox`](https://angular.io/extended-diagnostics/NG8101) * [NG8102 - `nullishCoalescingNotNullable`](https://angular.io/extended-diagnostics/NG8102) * [NG8103 - `missingControlFlowDirective`](https://angular.io/extended-diagnostics/NG8103) * [NG8105 - `missingNgForOfLet`](https://angular.io/extended-diagnostics/NG8105) * [NG8106 - `suffixNotSupported`](https://angular.io/extended-diagnostics/NG8106) * [NG8104 - `textAttributeNotBinding`](https://angular.io/extended-diagnostics/NG8104) Configuration ------------- Extended diagnostics are warnings by default and do not block compilation. 
Each diagnostic can be configured as either: * `warning` (default) - The compiler emits the diagnostic as a warning but does not block compilation. The compiler will still exit with status code 0, even if warnings are emitted. * `error` - The compiler emits the diagnostic as an error and fails the compilation. The compiler will exit with a non-zero status code if one or more errors are emitted. * `suppress` - The compiler does *not* emit the diagnostic at all. Check severity can be configured in the project's `tsconfig.json` file: ``` { "angularCompilerOptions": { "extendedDiagnostics": { // The categories to use for specific diagnostics. "checks": { // Maps check name to its category. "invalidBananaInBox": "suppress" }, // The category to use for any diagnostics not listed in `checks` above. "defaultCategory": "error" } } } ``` The `checks` field maps the name of individual diagnostics to their associated category. See [Diagnostics](https://angular.io/extended-diagnostics#diagnostics) for a complete list of extended diagnostics and the name to use for configuring them. The `defaultCategory` field is used for any diagnostics that are not explicitly listed under `checks`. If not set, such diagnostics will be treated as `warning`. Extended diagnostics will emit when [`strictTemplates`](guide/template-typecheck#strict-mode) is enabled. This is required to allow the compiler to better understand Angular template types and provide accurate and meaningful diagnostics. Semantic Versioning ------------------- The Angular team intends to add or enable new extended diagnostics in **minor** versions of Angular (see [semver](https://docs.npmjs.com/about-semantic-versioning)). This means that upgrading Angular may show new warnings in your existing codebase. This enables the team to deliver features more quickly and to make extended diagnostics more accessible to developers. However, setting `"defaultCategory": "error"` will promote such warnings to hard errors. This can cause a minor version upgrade to introduce compilation errors, which may be seen as a semver non-compliant breaking change. Any new diagnostics can be suppressed or demoted to warnings via the above [configuration](https://angular.io/extended-diagnostics#configuration), so the impact of a new diagnostic should be minimal to projects that treat extended diagnostics as errors by default. Defaulting to error is a very powerful tool; just be aware of this semver caveat when deciding if `error` is the right default for your project. New Diagnostics --------------- The Angular team is always open to suggestions about new diagnostics that could be added. Extended diagnostics should generally: * Detect a common, non-obvious developer mistake with Angular templates * Clearly articulate why this pattern can lead to bugs or unintended behavior * Suggest one or more clear solutions * Have a low, preferably zero, false-positive rate * Apply to the vast majority of Angular applications (not specific to an unofficial library) * Improve program correctness or performance (not style, that responsibility falls to a linter) If you have an idea for a compiler check which fits these criteria, consider filing a [feature request](https://github.com/angular/angular/issues/new?template=2-feature-request.yaml). Last reviewed on Mon Feb 28 2022 angular Getting started with Angular Getting started with Angular ============================ Welcome to Angular! 
This tutorial introduces you to the essentials of Angular by walking you through building an e-commerce site with a catalog, shopping cart, and check-out form. To help you get started right away, this tutorial uses a ready-made application that you can examine and modify interactively on [StackBlitz](https://stackblitz.com) —without having to [set up a local work environment](guide/setup-local "Setup guide"). StackBlitz is a browser-based development environment where you can create, save, and share projects using a variety of technologies. Prerequisites ------------- To get the most out of this tutorial, you should already have a basic understanding of the following. * [HTML](https://developer.mozilla.org/docs/Learn/HTML "Learning HTML: Guides and tutorials") * [JavaScript](https://developer.mozilla.org/docs/Web/JavaScript "JavaScript") * [TypeScript](https://www.typescriptlang.org/ "The TypeScript language") Take a tour of the example application -------------------------------------- You build Angular applications with components. Components define areas of responsibility in the UI that let you reuse sets of UI functionality. A component consists of three things: | Component Part | Details | | --- | --- | | A component class | Handles data and functionality | | An HTML template | Determines the UI | | Component-specific styles | Define the look and feel | This guide demonstrates building an application with the following components: | Components | Details | | --- | --- | | `<app-root>` | The first component to load and the container for the other components | | `<app-top-bar>` | The store name and checkout button | | `<app-product-list>` | The product list | | `<app-product-alerts>` | A component that contains the application's alerts | For more information about components, see [Introduction to Components](guide/architecture-components "Introduction to Components and Templates"). Create the sample project ------------------------- To create the sample project, generate the ready-made sample project in StackBlitz. To save your work: 1. Log into StackBlitz. 2. Fork the project you generated. 3. Save periodically. In StackBlitz, the preview pane on the right shows the starting state of the example application. The preview features two areas: * A top bar with the store name, `My Store`, and a checkout button * A header for a product list, `Products` The project section on the left shows the source files that make up the application, including the infrastructure and configuration files. When you generate the StackBlitz example applications that accompany the tutorials, StackBlitz creates the starter files and mock data for you. The files you use throughout the tutorial are in the `src` folder. For more information on how to use StackBlitz, see the [StackBlitz documentation](https://developer.stackblitz.com/docs/platform). Create the product list ----------------------- In this section, you'll update the application to display a list of products. You'll use predefined product data from the `products.ts` file and methods from the `product-list.component.ts` file. This section guides you through editing the HTML, also known as the template. 1. In the `product-list` folder, open the template file `product-list.component.html`. 2. Add an `*[ngFor](api/common/ngfor)` structural directive on a `<div>`, as follows. ``` <h2>Products</h2> <div *ngFor="let product of products"> </div> ``` With `*[ngFor](api/common/ngfor)`, the `<div>` repeats for each product in the list. 
Structural directives shape or reshape the DOM's structure, by adding, removing, and manipulating elements. For more information about structural directives, see [Structural directives](guide/structural-directives). 3. Inside the `<div>`, add an `<h3>` and `{{ product.name }}`. The `{{ product.name }}` statement is an example of Angular's interpolation syntax. Interpolation `{{ }}` lets you render the property value as text. ``` <h2>Products</h2> <div *ngFor="let product of products"> <h3> {{ product.name }} </h3> </div> ``` The preview pane updates to display the name of each product in the list. 4. To make each product name a link to product details, add the `<a>` element around `{{ product.name }}`. 5. Set the title to be the product's name by using the property binding `[ ]` syntax, as follows: ``` <h2>Products</h2> <div *ngFor="let product of products"> <h3> <a [title]="product.name + ' details'"> {{ product.name }} </a> </h3> </div> ``` In the preview pane, hover over a product name to see the bound name property value, which is the product name plus the word "details". Property binding `[ ]` lets you use the property value in a template expression. 6. Add the product descriptions. On a `<p>` element, use an `*[ngIf](api/common/ngif)` directive so that Angular only creates the `<p>` element if the current product has a description. ``` <h2>Products</h2> <div *ngFor="let product of products"> <h3> <a [title]="product.name + ' details'"> {{ product.name }} </a> </h3> <p *ngIf="product.description"> Description: {{ product.description }} </p> </div> ``` The application now displays the name and description of each product in the list. Notice that the final product does not have a description paragraph. Angular doesn't create the `<p>` element because the product's description property is empty. 7. Add a button so users can share a product. Bind the button's `click` event to the `share()` method in `product-list.component.ts`. Event binding uses a set of parentheses, `( )`, around the event, as in the `(click)` event on the `<button>` element. ``` <h2>Products</h2> <div *ngFor="let product of products"> <h3> <a [title]="product.name + ' details'"> {{ product.name }} </a> </h3> <p *ngIf="product.description"> Description: {{ product.description }} </p> <button type="button" (click)="share()"> Share </button> </div> ``` Each product now has a **Share** button. Clicking the **Share** button triggers an alert that states, "The product has been shared!". In editing the template, you have explored some of the most popular features of Angular templates. For more information, see [Introduction to components and templates](guide/architecture-components#template-syntax "Template Syntax"). Pass data to a child component ------------------------------ Currently, the product list displays the name and description of each product. The `ProductListComponent` also defines a `products` property that contains imported data for each product from the `products` array in `products.ts`. The next step is to create a new alert feature that uses product data from the `ProductListComponent`. The alert checks the product's price, and, if the price is greater than $700, displays a **Notify Me** button that lets users sign up for notifications when the product goes on sale. This section walks you through creating a child component, `ProductAlertsComponent`, that can receive data from its parent component, `ProductListComponent`. 1. 
Click on the plus sign above the current terminal to create a new terminal to run the command to generate the component. 2. In the new terminal, generate a new component named `product-alerts` by running the following command: ``` `ng generate component product-alerts` ``` The generator creates starter files for the three parts of the component: * `product-alerts.component.ts` * `product-alerts.component.html` * `product-alerts.component.css` 3. Open `product-alerts.component.ts`. The `@[Component](api/core/component)()` decorator indicates that the following class is a component. `@[Component](api/core/component)()` also provides metadata about the component, including its selector, templates, and styles. ``` @Component({ selector: 'app-product-alerts', templateUrl: './product-alerts.component.html', styleUrls: ['./product-alerts.component.css'] }) export class ProductAlertsComponent { } ``` Key features in the `@[Component](api/core/component)()` are as follows: * The `selector`, `app-product-alerts`, identifies the component. By convention, Angular component selectors begin with the prefix `app-`, followed by the component name. * The template and style filenames reference the component's HTML and CSS * The `@[Component](api/core/component)()` definition also exports the class, `ProductAlertsComponent`, which handles functionality for the component 4. To set up `ProductAlertsComponent` to receive product data, first import `[Input](api/core/input)` from `@angular/core`. ``` import { Component, Input } from '@angular/core'; import { Product } from '../products'; ``` 5. In the `ProductAlertsComponent` class definition, define a property named `product` with an `@[Input](api/core/input)()` decorator. The `@[Input](api/core/input)()` decorator indicates that the property value passes in from the component's parent, `ProductListComponent`. ``` export class ProductAlertsComponent { @Input() product!: Product; } ``` 6. Open `product-alerts.component.html` and replace the placeholder paragraph with a **Notify Me** button that appears if the product price is over $700. ``` <p *ngIf="product && product.price > 700"> <button type="button">Notify Me</button> </p> ``` 7. The generator automatically added the `ProductAlertsComponent` to the `AppModule` to make it available to other components in the application. ``` import { ProductAlertsComponent } from './product-alerts/product-alerts.component'; @NgModule({ declarations: [ AppComponent, TopBarComponent, ProductListComponent, ProductAlertsComponent, ], ``` 8. Finally, to display `ProductAlertsComponent` as a child of `ProductListComponent`, add the `<app-product-alerts>` element to `product-list.component.html`. Pass the current product as input to the component using property binding. ``` <button type="button" (click)="share()"> Share </button> <app-product-alerts [product]="product"> </app-product-alerts> ``` The new product alert component takes a product as input from the product list. With that input, it shows or hides the **Notify Me** button, based on the price of the product. The Phone XL price is over $700, so the **Notify Me** button appears on that product. Pass data to a parent component ------------------------------- To make the **Notify Me** button work, the child component needs to notify and pass the data to the parent component. The `ProductAlertsComponent` needs to emit an event when the user clicks **Notify Me** and the `ProductListComponent` needs to respond to the event. 
> In new components, the Angular Generator includes an empty `constructor()`, the `[OnInit](api/core/oninit)` interface, and the `ngOnInit()` method. Since these steps don't use them, the following code examples omit them for brevity. > > 1. In `product-alerts.component.ts`, import `[Output](api/core/output)` and `[EventEmitter](api/core/eventemitter)` from `@angular/core`. ``` import { Component, Input, Output, EventEmitter } from '@angular/core'; import { Product } from '../products'; ``` 2. In the component class, define a property named `notify` with an `@[Output](api/core/output)()` decorator and an instance of `[EventEmitter](api/core/eventemitter)()`. Configuring `ProductAlertsComponent` with an `@[Output](api/core/output)()` allows the `ProductAlertsComponent` to emit an event when the value of the `notify` property changes. ``` export class ProductAlertsComponent { @Input() product: Product | undefined; @Output() notify = new EventEmitter(); } ``` 3. In `product-alerts.component.html`, update the **Notify Me** button with an event binding to call the `notify.emit()` method. ``` <p *ngIf="product && product.price > 700"> <button type="button" (click)="notify.emit()">Notify Me</button> </p> ``` 4. Define the behavior that happens when the user clicks the button. The parent, `ProductListComponent` —not the `ProductAlertsComponent`— acts when the child raises the event. In `product-list.component.ts`, define an `onNotify()` method, similar to the `share()` method. ``` export class ProductListComponent { products = [...products]; share() { window.alert('The product has been shared!'); } onNotify() { window.alert('You will be notified when the product goes on sale'); } } ``` 5. Update the `ProductListComponent` to receive data from the `ProductAlertsComponent`. In `product-list.component.html`, bind `<app-product-alerts>` to the `onNotify()` method of the product list component. `<app-product-alerts>` is what displays the **Notify Me** button. ``` <button type="button" (click)="share()"> Share </button> <app-product-alerts [product]="product" (notify)="onNotify()"> </app-product-alerts> ``` 6. Click the **Notify Me** button to trigger an alert which reads, "You will be notified when the product goes on sale". For more information on communication between components, see [Component Interaction](guide/component-interaction "Component interaction"). What's next ----------- In this section, you've created an application that iterates through data and features components that communicate with each other. To continue exploring Angular and developing this application: * Continue to [In-app navigation](https://angular.io/start/start-routing "Getting started: In-app navigation") to create a product details page. * Skip ahead to [Deployment](https://angular.io/start/start-deployment "Getting started: Deployment") to move to local development, or deploy your application to Firebase or your own server. Last reviewed on Mon Feb 28 2022
angular CLI Overview and Command Reference

CLI Overview and Command Reference
==================================

The Angular CLI is a command-line interface tool that you use to initialize, develop, scaffold, and maintain Angular applications directly from a command shell.

Installing Angular CLI
----------------------

Major versions of Angular CLI follow the supported major version of Angular, but minor versions can be released separately. Install the CLI using the `npm` package manager:

```
npm install -g @angular/cli
```

For details about changes between versions, and information about updating from previous releases, see the Releases tab on GitHub: <https://github.com/angular/angular-cli/releases>

Basic workflow
--------------

Invoke the tool on the command line through the `ng` executable. Online help is available on the command line. Enter the following to list commands or options for a given command (such as [generate](https://angular.io/cli/generate)) with a short description.

```
ng help
ng generate --help
```

To create, build, and serve a new, basic Angular project on a development server, go to the parent directory of your new workspace and use the following commands:

```
ng new my-first-project
cd my-first-project
ng serve
```

In your browser, open http://localhost:4200/ to see the new application run. When you use the [ng serve](https://angular.io/cli/serve) command to build an application and serve it locally, the server automatically rebuilds the application and reloads the page when you change any of the source files.

> When you run `ng new my-first-project`, a new folder named `my-first-project` will be created in the current working directory. Since you want to be able to create files inside that folder, make sure you have sufficient rights in the current working directory before running the command.
>
> If the current working directory is not the right place for your project, you can change to a more appropriate directory by running `cd <path-to-other-directory>`.

Workspaces and project files
----------------------------

The [ng new](https://angular.io/cli/new) command creates an *Angular workspace* folder and generates a new application skeleton. A workspace can contain multiple applications and libraries. The initial application created by the [ng new](https://angular.io/cli/new) command is at the top level of the workspace. When you generate an additional application or library in a workspace, it goes into a `projects/` subfolder.

A newly generated application contains the source files for a root module, with a root component and template. Each application has a `src` folder that contains the logic, data, and assets.

You can edit the generated files directly, or add to and modify them using CLI commands. Use the [ng generate](https://angular.io/cli/generate) command to add new files for additional components and services, and code for new pipes, directives, and so on. Commands such as [add](https://angular.io/cli/add) and [generate](https://angular.io/cli/generate), which create or operate on applications and libraries, must be executed from within a workspace or project folder.

* See more about the [Workspace file structure](guide/file-structure).

### Workspace and project configuration

A single workspace configuration file, `angular.json`, is created at the top level of the workspace. This is where you can set per-project defaults for CLI command options, and specify configurations to use when the CLI builds a project for different targets.
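For illustration, the following is a minimal sketch of what such a file can look like. The project name `my-first-project`, the component schematic default, and the `production` build configuration shown here are placeholder values chosen for this example, not output copied from a generated workspace:

```
{
  "version": 1,
  "projects": {
    "my-first-project": {
      "projectType": "application",
      "root": "",
      "sourceRoot": "src",
      "schematics": {
        "@schematics/angular:component": {
          "style": "scss"
        }
      },
      "architect": {
        "build": {
          "builder": "@angular-devkit/build-angular:browser",
          "configurations": {
            "production": {
              "optimization": true,
              "outputHashing": "all"
            }
          }
        }
      }
    }
  }
}
```

With a file like this in place, `ng generate component` would default to SCSS styles for this project, and `ng build --configuration production` would pick up the settings defined under the `production` configuration.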
The [ng config](https://angular.io/cli/config) command lets you set and retrieve configuration values from the command line, or you can edit the `angular.json` file directly.

> **NOTE**: Option names in the configuration file must use [camelCase](guide/glossary#case-types), while option names supplied to commands must be dash-case.

* See more about [Workspace Configuration](guide/workspace-config).

CLI command-language syntax
---------------------------

Command syntax is shown as follows: `ng` [*optional-arg*] `[options]`

* Most commands, and some options, have aliases. Aliases are shown in the syntax statement for each command.
* Option names are prefixed with a double dash (`--`). Option aliases are prefixed with a single dash (`-`). Arguments are not prefixed. For example:

```
ng build my-app -c production
```

* Typically, the name of a generated artifact can be given as an argument to the command or specified with the `--name` option.
* Arguments and option names must be given in [dash-case](guide/glossary#case-types). For example: `--my-option-name`

### Boolean options

Boolean options have two forms: `--this-option` sets the flag to `true`, `--no-this-option` sets it to `false`. If neither option is supplied, the flag remains in its default state, as listed in the reference documentation.

### Array options

Array options can be provided in two forms: `--option value1 value2` or `--option value1 --option value2`.

### Relative paths

Options that specify files can be given as absolute paths, or as paths relative to the current working directory, which is generally either the workspace or project root.

### Schematics

The [ng generate](https://angular.io/cli/generate) and [ng add](https://angular.io/cli/add) commands take, as an argument, the artifact or library to be generated or added to the current project. In addition to any general options, each artifact or library defines its own options in a *schematic*. Schematic options are supplied to the command in the same format as immediate command options.

Command Overview
----------------

| Command | Alias | Description |
| --- | --- | --- |
| [`add`](https://angular.io/cli/add) | | Adds support for an external library to your project. |
| [`analytics`](https://angular.io/cli/analytics) | | Configures the gathering of Angular CLI usage metrics. |
| [`build`](https://angular.io/cli/build) | `b` | Compiles an Angular application or library into an output directory named dist/ at the given output path. |
| [`cache`](https://angular.io/cli/cache) | | Configure persistent disk cache and retrieve cache statistics. |
| [`completion`](https://angular.io/cli/completion) | | Set up Angular CLI autocompletion for your terminal. |
| [`config`](https://angular.io/cli/config) | | Retrieves or sets Angular configuration values in the angular.json file for the workspace. |
| [`deploy`](https://angular.io/cli/deploy) | | Invokes the deploy builder for a specified project or for the default project in the workspace. |
| [`doc`](https://angular.io/cli/doc) | `d` | Opens the official Angular documentation (angular.io) in a browser, and searches for a given keyword. |
| [`e2e`](https://angular.io/cli/e2e) | `e` | Builds and serves an Angular application, then runs end-to-end tests. |
| [`extract-i18n`](https://angular.io/cli/extract-i18n) | | Extracts i18n messages from source code. |
| [`generate`](https://angular.io/cli/generate) | `g` | Generates and/or modifies files based on a schematic. |
| [`lint`](https://angular.io/cli/lint) | | Runs linting tools on Angular application code in a given project folder. |
| [`new`](https://angular.io/cli/new) | `n` | Creates a new Angular workspace. |
| [`run`](https://angular.io/cli/run) | | Runs an Architect target with an optional custom builder configuration defined in your project. |
| [`serve`](https://angular.io/cli/serve) | `s` | Builds and serves your application, rebuilding on file changes. |
| [`test`](https://angular.io/cli/test) | `t` | Runs unit tests in a project. |
| [`update`](https://angular.io/cli/update) | | Updates your workspace and its dependencies. See <https://update.angular.io/>. |
| [`version`](https://angular.io/cli/version) | `v` | Outputs Angular CLI version. |

angular Errors List

Errors List
===========

* [`NG0100: Expression Changed After Checked`](https://angular.io/errors/NG0100)
* [`NG01003: Wrong Async Validator Return Type`](https://angular.io/errors/NG01003)
* [`NG01203: Missing value accessor`](https://angular.io/errors/NG01203)
* [`NG0200: Circular Dependency in DI`](https://angular.io/errors/NG0200)
* [`NG0201: No Provider Found`](https://angular.io/errors/NG0201)
* [`NG0203: `inject()` must be called from an injection context`](https://angular.io/errors/NG0203)
* [`NG0209: Invalid multi provider`](https://angular.io/errors/NG0209)
* [`NG02200: Missing Iterable Differ`](https://angular.io/errors/NG02200)
* [`NG0300: Selector Collision`](https://angular.io/errors/NG0300)
* [`NG0301: Export Not Found`](https://angular.io/errors/NG0301)
* [`NG0302: Pipe Not Found`](https://angular.io/errors/NG0302)
* [`NG0403: Bootstrapped NgModule doesn't specify which component to initialize`](https://angular.io/errors/NG0403)
* [`NG0910: Unsafe bindings on an iframe element`](https://angular.io/errors/NG0910)
* [`NG1001: Argument Not Literal`](https://angular.io/errors/NG1001)
* [`NG2003: Missing Token`](https://angular.io/errors/NG2003)
* [`NG2009: Invalid Shadow DOM selector`](https://angular.io/errors/NG2009)
* [`NG3003: Import Cycle Detected`](https://angular.io/errors/NG3003)
* [`NG6100: NgModule.id Set to module.id anti-pattern`](https://angular.io/errors/NG6100)
* [`NG6999: Invalid metadata`](https://angular.io/errors/NG6999)
* [`NG8001: Invalid Element`](https://angular.io/errors/NG8001)
* [`NG8002: Invalid Attribute`](https://angular.io/errors/NG8002)
* [`NG8003: Missing Reference Target`](https://angular.io/errors/NG8003)

angular Merge translations into the application

Merge translations into the application
=======================================

To merge the completed translations into your project, complete the following actions:

1. Use the [Angular CLI](cli "CLI Overview and Command Reference | Angular") to build a copy of the distributable files of your project.
2. Use the `"localize"` option to replace all of the i18n messages with the valid translations and build a localized variant application. A variant application is a complete copy of the distributable files of your application translated for a single locale.

After you merge the translations, serve each distributable copy of the application using server-side language detection or different subdirectories.

> For more information about how to serve each distributable copy of the application, see [deploying multiple locales](i18n-common-deploy "Deploy multiple locales | Angular").
> > For a compile-time translation of the application, the build process uses [ahead-of-time (AOT) compilation](glossary#ahead-of-time-aot-compilation "ahead-of-time (AOT) compilation - Glossary | Angular") to produce a small, fast, ready-to-run application. > For a detailed explanation of the build process, see [Building and serving Angular apps](build "Building and serving Angular apps | Angular"). The build process works for translation files in the `.xlf` format or in another format that Angular understands, such as `.xtb`. For more information about translation file formats used by Angular, see [Change the source language file format](i18n-common-translation-files#change-the-source-language-file-format "Change the source language file format - Work with translation files | Angular") > > To build a separate distributable copy of the application for each locale, [define the locales in the build configuration](i18n-common-merge#define-locales-in-the-build-configuration "Define locales in the build configuration - Merge translations into the application | Angular") in the [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file of your project. This method shortens the build process by removing the requirement to perform a full application build for each locale. To [generate application variants for each locale](i18n-common-merge#generate-application-variants-for-each-locale "Generate application variants for each locale - Merge translations into the application | Angular"), use the `"localize"` option in the [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file. Also, to [build from the command line](i18n-common-merge#build-from-the-command-line "Build from the command line - Merge translations into the application | Angular"), use the [`build`](cli/build "ng build | CLI | Angular") [Angular CLI](cli "CLI Overview and Command Reference | Angular") command with the `--localize` option. > Optionally, [apply specific build options for just one locale](i18n-common-merge#apply-specific-build-options-for-just-one-locale "Apply specific build options for just one locale - Merge translations into the application | Angular") for a custom locale configuration. > > Define locales in the build configuration ----------------------------------------- Use the `i18n` project option in the [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file of your project to define locales for a project. The following sub-options identify the source language and tell the compiler where to find supported translations for the project. | Suboption | Details | | --- | --- | | `sourceLocale` | The locale you use within the application source code (`en-US` by default) | | `locales` | A map of locale identifiers to translation files | ### `angular.json` for `en-US` and `fr` example For example, the following excerpt of an [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file sets the source locale to `en-US` and provides the path to the French (`fr`) locale translation file. ``` "projects": { "angular.io-example": { // ... "i18n": { "sourceLocale": "en-US", "locales": { "fr": { "translation": "src/locale/messages.fr.xlf", // ... } } }, "architect": { // ... 
} } } } ``` Generate application variants for each locale --------------------------------------------- To use your locale definition in the build configuration, use the `"localize"` option in the [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file to tell the CLI which locales to generate for the build configuration. * Set `"localize"` to `true` for all the locales previously defined in the build configuration. * Set `"localize"` to an array of a subset of the previously defined locale identifiers to build only those locale versions. * Set `"localize"` to `false` to disable localization and not generate any locale-specific versions. > **NOTE**: [Ahead-of-time (AOT) compilation](glossary#ahead-of-time-aot-compilation "ahead-of-time (AOT) compilation - Glossary | Angular") is required to localize component templates. > > If you changed this setting, set `"aot"` to `true` in order to use AOT. > > > Due to the deployment complexities of i18n and the need to minimize rebuild time, the development server only supports localizing a single locale at a time. If you set the `"localize"` option to `true`, define more than one locale, and use `ng serve`; then an error occurs. If you want to develop against a specific locale, set the `"localize"` option to a specific locale. For example, for French (`fr`), specify `"localize": ["fr"]`. > > The CLI loads and registers the locale data, places each generated version in a locale-specific directory to keep it separate from other locale versions, and puts the directories within the configured `outputPath` for the project. For each application variant the `lang` attribute of the `html` element is set to the locale. The CLI also adjusts the HTML base HREF for each version of the application by adding the locale to the configured `baseHref`. Set the `"localize"` property as a shared configuration to effectively inherit for all the configurations. Also, set the property to override other configurations. ### `angular.json` include all locales from build example The following example displays the `"localize"` option set to `true` in the [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file, so that all locales defined in the build configuration are built. ``` "build": { "builder": "@angular-devkit/build-angular:browser", "options": { "localize": true, // ... }, ``` Build from the command line --------------------------- Also, use the `--localize` option with the [`ng build`](cli/build "ng build | CLI | Angular") command and your existing `production` configuration. The CLI builds all locales defined in the build configuration. If you set the locales in build configuration, it is similar to when you set the `"localize"` option to `true`. > For more information about how to set the locales, see [Generate application variants for each locale](i18n-common-merge#generate-application-variants-for-each-locale "Generate application variants for each locale - Merge translations into the application | Angular"). > > ``` ng build --localize ``` Apply specific build options for just one locale ------------------------------------------------ To apply specific build options to only one locale, specify a single locale to create a custom locale-specific configuration. > Use the [Angular CLI](cli "CLI Overview and Command Reference | Angular") development server (`ng serve`) with only a single locale. 
> > ### build for French example The following example displays a custom locale-specific configuration using a single locale. ``` "build": { // ... "configurations": { // ... "fr": { "localize": ["fr"] } }, // ... }, "serve": { "builder": "@angular-devkit/build-angular:dev-server", "configurations": { // ... "fr": { "browserTarget": "angular.io-example:build:development,fr" } }, // ... }, // ... } ``` Pass this configuration to the `ng serve` or `ng build` commands. The following code example displays how to serve the French language file. ``` ng serve --configuration=fr ``` For production builds, use configuration composition to run both configurations. ``` ng build --configuration=production,fr ``` ``` "architect": { "build": { "builder": "@angular-devkit/build-angular:browser", "options": { // ... }, "configurations": { // ... "fr": { "localize": ["fr"] } }, // ... }, "serve": { "builder": "@angular-devkit/build-angular:dev-server", "configurations": { "production": { "browserTarget": "angular.io-example:build:production" }, // ... "fr": { "browserTarget": "angular.io-example:build:development,fr" } }, // ... }, // ... } ``` Report missing translations --------------------------- When a translation is missing, the build succeeds but generates a warning such as `Missing translation for message "{translation_text}"`. To configure the level of warning that is generated by the Angular compiler, specify one of the following levels. | Warning level | Details | Output | | --- | --- | --- | | `error` | Throw an error and the build fails | n/a | | `ignore` | Do nothing | n/a | | `warning` | Displays the default warning in the console or shell | `Missing translation for message "{translation_text}"` | Specify the warning level in the `options` section for the `build` target of your [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file. ### `angular.json` `error` warning example The following example displays how to set the warning level to `error`. ``` "build": { "builder": "@angular-devkit/build-angular:browser", "options": { // ... "i18nMissingTranslation": "error" }, ``` > When you compile your Angular project into an Angular application, the instances of the `i18n` attribute are replaced with instances of the [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular") tagged message string. This means that your Angular application is translated after compilation. This also means that you can create localized versions of your Angular application without re-compiling your entire Angular project for each locale. > > When you translate your Angular application, the *translation transformation* replaces and reorders the parts (static strings and expressions) of the template literal string with strings from a collection of translations. For more information, see [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular"). > > > > **tldr;** > > > > Compile once, then translate for each locale. > > > > > > What's next ----------- * [Deploy multiple locales](i18n-common-deploy "Deploy multiple locales | Angular") Last reviewed on Mon Feb 28 2022
angular Prepare component for translation Prepare component for translation ================================= To prepare your project for translation, complete the following actions. * Use the `i18n` attribute to mark text in component templates * Use the `i18n-` attribute to mark attribute text strings in component templates * Use the `[$localize](../api/localize/init/%24localize)` tagged message string to mark text strings in component code Mark text in component template ------------------------------- In a component template, the i18n metadata is the value of the `i18n` attribute. ``` <element i18n="{i18n_metadata}">{string_to_translate}</element> ``` Use the `i18n` attribute to mark a static text message in your component templates for translation. Place it on every element tag that contains fixed text you want to translate. > The `i18n` attribute is a custom attribute that the Angular tools and compilers recognize. > > ### `i18n` example The following `<h1>` tag displays a simple English language greeting, "Hello i18n!". ``` <h1>Hello i18n!</h1> ``` To mark the greeting for translation, add the `i18n` attribute to the `<h1>` tag. ``` <h1 i18n>Hello i18n!</h1> ``` ### Translate inline text without HTML element Use the `[<ng-container>](../api/core/ng-container)` element to associate a translation behavior for specific text without changing the way text is displayed. > Each HTML element creates a new DOM element. To avoid creating a new DOM element, wrap the text in an `[<ng-container>](../api/core/ng-container)` element. The following example shows the `[<ng-container>](../api/core/ng-container)` element transformed into a non-displayed HTML comment. > > > ``` > <ng-container i18n>I don't output any element</ng-container> > ``` > Mark element attributes for translations ---------------------------------------- In a component template, the i18n metadata is the value of the `i18n-{attribute_name}` attribute. ``` <element i18n-{attribute_name}="{i18n_metadata}" {attribute_name}="{attribute_value}" /> ``` The attributes of HTML elements include text that should be translated along with the rest of the displayed text in the component template. Use `i18n-{attribute_name}` with any attribute of any element and replace `{attribute_name}` with the name of the attribute. Use the following syntax to assign a meaning, description, and custom ID. ``` i18n-{attribute_name}="{meaning}|{description}@@{id}" ``` ### `i18n-title` example To translate the title of an image, review this example. The following example displays an image with a `title` attribute. ``` <img [src]="logo" title="Angular logo" alt="Angular logo"> ``` To mark the title attribute for translation, complete the following action. 1. Add the `i18n-title` attribute The following example displays how to mark the `title` attribute on the `[img](../api/common/ngoptimizedimage)` tag by adding `i18n-title`. ``` <img [src]="logo" i18n-title title="Angular logo" alt="Angular logo"/> ``` Mark text in component code --------------------------- In component code, the translation source text and the metadata are surrounded by backtick (```) characters. Use the [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular") tagged message string to mark a string in your code for translation. ``` $localize `string_to_translate`; ``` The i18n metadata is surrounded by colon (`:`) characters and prepends the translation source text. 
``` $localize `:{i18n_metadata}:string_to_translate` ``` ### Include interpolated text Include [interpolations](glossary#interpolation "interpolation - Glossary | Angular") in a [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular") tagged message string. ``` $localize `string_to_translate ${variable_name}`; ``` ### Name the interpolation placeholder ``` $localize `string_to_translate ${variable_name}:placeholder_name:`; ``` i18n metadata for translation ----------------------------- ``` {meaning}|{description}@@{custom_id} ``` The following parameters provide context and additional information to reduce confusion for your translator. | Metadata parameter | Details | | --- | --- | | Custom ID | Provide a custom identifier | | Description | Provide additional information or context | | Meaning | Provide the meaning or intent of the text within the specific context | For additional information about custom IDs, see [Manage marked text with custom IDs](i18n-optional-manage-marked-text "Manage marked text with custom IDs | Angular"). ### Add helpful descriptions and meanings To translate a text message accurately, provide additional information or context for the translator. Add a *description* of the text message as the value of the `i18n` attribute or [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular") tagged message string. The following example shows the value of the `i18n` attribute. ``` <h1 i18n="An introduction header for this sample">Hello i18n!</h1> ``` The following example shows the value of the [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular") tagged message string with a description. ``` $localize `:An introduction header for this sample:Hello i18n!`; ``` The translator may also need to know the meaning or intent of the text message within this particular application context, in order to translate it the same way as other text with the same meaning. Start the `i18n` attribute value with the *meaning* and separate it from the *description* with the `|` character: `{meaning}|{description}`. #### `h1` example For example, you may want to specify that the `<h1>` tag is a site header that you need translated the same way, whether it is used as a header or referenced in another section of text. The following example shows how to specify that the `<h1>` tag must be translated as a header or referenced elsewhere. ``` <h1 i18n="site header|An introduction header for this sample">Hello i18n!</h1> ``` The result is any text marked with `site header`, as the *meaning* is translated exactly the same way. The following code example shows the value of the [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular") tagged message string with a meaning and a description. ``` $localize `:site header|An introduction header for this sample:Hello i18n!`; ``` The Angular extraction tool generates a translation unit entry for each `i18n` attribute in a template. The Angular extraction tool assigns each translation unit a unique ID based on the *meaning* and *description*. > For more information about the Angular extraction tool, see [Work with translation files](i18n-common-translation-files "Work with translation files | Angular"). > > The same text elements with different *meanings* are extracted with different IDs. 
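To see how a *meaning* separates otherwise identical text, consider the following minimal sketch; the component, its selector, and the descriptions are hypothetical and only illustrate the `{meaning}|{description}` syntax.

```
import { Component } from '@angular/core';

// Hypothetical component: the same word "right" is marked twice with
// different meanings, so the extraction tool emits two translation units
// with two different IDs.
@Component({
  selector: 'app-answer-hint',
  template: `
    <p i18n="correct|Confirms that the user's answer is accurate">right</p>
    <p i18n="direction|Tells the user which way to turn">right</p>
  `,
})
export class AnswerHintComponent {}
```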
For example, if the word "right" uses the following two definitions in two different locations, the word is translated differently and merged back into the application as different translation entries. * `correct` as in "you are right" * `direction` as in "turn right" If the same text elements meet the following conditions, the text elements are extracted only once and use the same ID. * Same meaning or definition * Different descriptions That one translation entry is merged back into the application wherever the same text elements appear. ICU expressions --------------- ICU expressions help you mark alternate text in component templates to meet conditions. An ICU expression includes a component property, an ICU clause, and the case statements surrounded by open curly brace (`{`) and close curly brace (`}`) characters. ``` { component_property, icu_clause, case_statements } ``` The component property defines the variable. An ICU clause defines the type of conditional text. | ICU clause | Details | | --- | --- | | [`plural`](i18n-common-prepare#mark-plurals "Mark plurals - Prepare component for translation | Angular") | Mark the use of plural numbers | | [`select`](i18n-common-prepare#mark-alternates-and-nested-expressions "Mark alternates and nested expressions - Prepare templates for translation | Angular") | Mark choices for alternate text based on your defined string values | To simplify translation, use International Components for Unicode clauses (ICU clauses) with regular expressions. > The ICU clauses adhere to the [ICU Message Format](https://unicode-org.github.io/icu/userguide/format_parse/messages "ICU Message Format - ICU Documentation | Unicode | GitHub") specified in the [CLDR pluralization rules](http://cldr.unicode.org/index/cldr-spec/plural-rules "Plural Rules | CLDR - Unicode Common Locale Data Repository | Unicode"). > > ### Mark plurals Different languages have different pluralization rules that increase the difficulty of translation. Because other locales express cardinality differently, you may need to set pluralization categories that do not align with English. Use the `plural` clause to mark expressions that may not be meaningful if translated word-for-word. ``` { component_property, plural, pluralization_categories } ``` After the pluralization category, enter the default text (English) surrounded by open curly brace (`{`) and close curly brace (`}`) characters. ``` pluralization_category { } ``` The following pluralization categories are available for English and may change based on the locale. | Pluralization category | Details | Example | | --- | --- | --- | | `zero` | Quantity is zero | `=0 { }` `zero { }` | | `one` | Quantity is 1 | `=1 { }` `one { }` | | `two` | Quantity is 2 | `=2 { }` `two { }` | | `few` | Quantity is 2 or more | `few { }` | | `many` | Quantity is a large number | `many { }` | | `other` | The default quantity | `other { }` | If none of the pluralization categories match, Angular uses `other` to match the standard fallback for a missing category. ``` other { default_quantity } ``` > For more information about pluralization categories, see [Choosing plural category names](http://cldr.unicode.org/index/cldr-spec/plural-rules#TOC-Choosing-Plural-Category-Names "Choosing Plural Category Names - Plural Rules | CLDR - Unicode Common Locale Data Repository | Unicode") in the [CLDR - Unicode Common Locale Data Repository](https://cldr.unicode.org "Unicode CLDR Project"). > > Many locales don't support some of the pluralization categories.
The default locale (`en-US`) uses a very simple `plural()` function that doesn't support the `few` pluralization category. Another locale with a simple `plural()` function is `es`. The following code example shows the [en-US `plural()`](https://github.com/angular/angular/blob/ecffc3557fe1bff9718c01277498e877ca44588d/packages/core/src/i18n/locale_en.ts#L14-L18 "Line 14 to 18 - angular/packages/core/src/i18n/locale_en.ts | angular/angular | GitHub") function. ``` function plural(n: number): number { let i = Math.floor(Math.abs(n)), v = n.toString().replace(/^[^.]*\.?/, '').length; if (i === 1 && v === 0) return 1; return 5; } ``` The `plural()` function only returns 1 (`one`) or 5 (`other`). The `few` category never matches. #### `minutes` example If you want to display the following phrase in English, where `x` is a number. ``` updated x minutes ago ``` And you also want to display the following phrases based on the cardinality of `x`. ``` updated just now ``` ``` updated one minute ago ``` Use HTML markup and [interpolations](glossary#interpolation "interpolation - Glossary | Angular"). The following code example shows how to use the `plural` clause to express the previous three situations in a `<span>` element. ``` <span i18n>Updated {minutes, plural, =0 {just now} =1 {one minute ago} other {{{minutes}} minutes ago}}</span> ``` Review the following details in the previous code example. | Parameters | Details | | --- | --- | | `minutes` | The first parameter specifies the component property is `minutes` and determines the number of minutes. | | `plural` | The second parameter specifies the ICU clause is `plural`. | | `=0 {just now}` | For zero minutes, the pluralization category is `=0`. The value is `just now`. | | `=1 {one minute}` | For one minute, the pluralization category is `=1`. The value is `one minute`. | | `other {{{minutes}} minutes ago}` | For any unmatched cardinality, the default pluralization category is `other`. The value is `{{minutes}} minutes ago`. | `{{minutes}}` is an [interpolation](glossary#interpolation "interpolation - Glossary | Angular"). ### Mark alternates and nested expressions The `select` clause marks choices for alternate text based on your defined string values. ``` { component_property, select, selection_categories } ``` Translate all of the alternates to display alternate text based on the value of a variable. After the selection category, enter the text (English) surrounded by open curly brace (`{`) and close curly brace (`}`) characters. ``` selection_category { text } ``` Different locales have different grammatical constructions that increase the difficulty of translation. Use HTML markup. If none of the selection categories match, Angular uses `other` to match the standard fallback for a missing category. ``` other { default_value } ``` #### `gender` example If you want to display the following phrase in English. ``` The author is other ``` And you also want to display the following phrases based on the `gender` property of the component. ``` The author is female ``` ``` The author is male ``` The following code example shows how to bind the `gender` property of the component and use the `select` clause to express the previous three situations in a `<span>` element. The `gender` property binds the outputs to each of following string values. | Value | English value | | --- | --- | | female | `female` | | male | `male` | | other | `other` | The `select` clause maps the values to the appropriate translations. 
The following code example shows `gender` property used with the select clause. ``` <span i18n>The author is {gender, select, male {male} female {female} other {other}}</span> ``` #### `gender` and `minutes` example Combine different clauses together, such as the `plural` and `select` clauses. The following code example shows nested clauses based on the `gender` and `minutes` examples. ``` <span i18n>Updated: {minutes, plural, =0 {just now} =1 {one minute ago} other {{{minutes}} minutes ago by {gender, select, male {male} female {female} other {other}}}} </span> ``` What's next ----------- * [Work with translation files](i18n-common-translation-files "Work with translation files | Angular") Last reviewed on Mon Feb 28 2022 angular View encapsulation View encapsulation ================== In Angular, a component's styles can be encapsulated within the component's host element so that they don't affect the rest of the application. The `[Component](../api/core/component)` decorator provides the [`encapsulation`](../api/core/component#encapsulation) option which can be used to control how the encapsulation is applied on a *per component* basis. Choose from the following modes: | Modes | Details | | --- | --- | | `[ViewEncapsulation.ShadowDom](../api/core/viewencapsulation#ShadowDom)` | Angular uses the browser's built-in [Shadow DOM API](https://developer.mozilla.org/docs/Web/Web_Components/Shadow_DOM) to enclose the component's view inside a ShadowRoot, used as the component's host element, and apply the provided styles in an isolated manner. `[ViewEncapsulation.ShadowDom](../api/core/viewencapsulation#ShadowDom)` only works on browsers that have built-in support for the shadow DOM (see [Can I use - Shadow DOM v1](https://caniuse.com/shadowdomv1)). Not all browsers support it, which is why the `[ViewEncapsulation.Emulated](../api/core/viewencapsulation#Emulated)` is the recommended and default mode. | | `[ViewEncapsulation.Emulated](../api/core/viewencapsulation#Emulated)` | Angular modifies the component's CSS selectors so that they are only applied to the component's view and do not affect other elements in the application, *emulating* Shadow DOM behavior. For more details, see [Inspecting generated CSS](view-encapsulation#inspect-generated-css). | | `[ViewEncapsulation.None](../api/core/viewencapsulation#None)` | Angular does not apply any sort of view encapsulation meaning that any styles specified for the component are actually globally applied and can affect any HTML element present within the application. This mode is essentially the same as including the styles into the HTML itself. | Inspecting generated CSS ------------------------ When using the emulated view encapsulation, Angular pre-processes all the component's styles so that they are only applied to the component's view. In the DOM of a running Angular application, elements belonging to components using emulated view encapsulation have some extra attributes attached to them: ``` <hero-details _nghost-pmm-5> <h2 _ngcontent-pmm-5>Mister Fantastic</h2> <hero-team _ngcontent-pmm-5 _nghost-pmm-6> <h3 _ngcontent-pmm-6>Team</h3> </hero-team> </hero-details> ``` Two kinds of these attributes exist: | Attributes | Details | | --- | --- | | `_nghost` | Are added to elements that enclose a component's view and that would be ShadowRoots in a native Shadow DOM encapsulation. This is typically the case for components' host elements. 
| | `_ngcontent` | Are added to child elements within a component's view; these are used to match the elements with their respective emulated ShadowRoots (host elements with a matching `_nghost` attribute). | The exact values of these attributes are a private implementation detail of Angular. They are automatically created and you should never refer to them in application code. They are targeted by the created component styles, which are injected in the `<head>` section of the DOM: ``` [_nghost-pmm-5] { display: block; border: 1px solid black; } h3[_ngcontent-pmm-6] { background-color: white; border: 1px solid #777; } ``` These styles are post-processed so that each CSS selector is augmented with the appropriate `_nghost` or `_ngcontent` attribute. These modified selectors make sure the styles are applied to components' views in an isolated and targeted fashion. Mixing encapsulation modes -------------------------- As mentioned earlier, you specify the encapsulation mode in the Component's decorator on a *per component* basis. This means that within your application you can have different components using different encapsulation strategies. Although possible, this is not recommended. If it is really needed, you should be aware of how the styles of components using different encapsulation modes interact with each other: | Modes | Details | | --- | --- | | `[ViewEncapsulation.Emulated](../api/core/viewencapsulation#Emulated)` | The styles of components are added to the `<head>` of the document, making them available throughout the application, but their selectors only affect elements within their respective components' templates. | | `[ViewEncapsulation.None](../api/core/viewencapsulation#None)` | The styles of components are added to the `<head>` of the document, making them available throughout the application, so are completely global and affect any matching elements within the document. | | `[ViewEncapsulation.ShadowDom](../api/core/viewencapsulation#ShadowDom)` | The styles of components are only added to the shadow DOM host, ensuring that they only affect elements within their respective components' views. | > Styles of `[ViewEncapsulation.Emulated](../api/core/viewencapsulation#Emulated)` and `[ViewEncapsulation.None](../api/core/viewencapsulation#None)` components are also added to the shadow DOM host of each `[ViewEncapsulation.ShadowDom](../api/core/viewencapsulation#ShadowDom)` component. > > This means that styles for components with `[ViewEncapsulation.None](../api/core/viewencapsulation#None)` affect matching elements within the shadow DOM. > > This approach may seem counter-intuitive at first. But without it a component with `[ViewEncapsulation.None](../api/core/viewencapsulation#None)` would be rendered differently within a component using `[ViewEncapsulation.ShadowDom](../api/core/viewencapsulation#ShadowDom)`, since its styles would not be available. > > ### Examples This section shows examples of how the styles of components with different `[ViewEncapsulation](../api/core/viewencapsulation)` settings interact. #### No encapsulation The first example shows a component that has `[ViewEncapsulation.None](../api/core/viewencapsulation#None)`. This component colors its template elements red.
``` @Component({ selector: 'app-no-encapsulation', template: ` <h2>None</h2> <div class="none-message">No encapsulation</div> `, styles: ['h2, .none-message { color: red; }'], encapsulation: ViewEncapsulation.None, }) export class NoEncapsulationComponent { } ``` Angular adds the styles for this component as global styles to the `<head>` of the document. As already mentioned, Angular also adds the styles to all shadow DOM hosts, making the styles available throughout the whole application. #### Emulated encapsulation The second example shows a component that has `[ViewEncapsulation.Emulated](../api/core/viewencapsulation#Emulated)`. This component colors its template elements green. ``` @Component({ selector: 'app-emulated-encapsulation', template: ` <h2>Emulated</h2> <div class="emulated-message">Emulated encapsulation</div> <app-no-encapsulation></app-no-encapsulation> `, styles: ['h2, .emulated-message { color: green; }'], encapsulation: ViewEncapsulation.Emulated, }) export class EmulatedEncapsulationComponent { } ``` Comparable to `[ViewEncapsulation.None](../api/core/viewencapsulation#None)`, Angular adds the styles for this component to the `<head>` of the document, but with "scoped" styles. Only the elements directly within this component's template are going to match its styles. Since the "scoped" styles from the `EmulatedEncapsulationComponent` are specific, they override the global styles from the `NoEncapsulationComponent`. In this example, the `EmulatedEncapsulationComponent` contains a `NoEncapsulationComponent`, but `NoEncapsulationComponent` is still styled as expected since the `EmulatedEncapsulationComponent` 's "scoped" styles do not match elements in its template. #### Shadow DOM encapsulation The third example shows a component that has `[ViewEncapsulation.ShadowDom](../api/core/viewencapsulation#ShadowDom)`. This component colors its template elements blue. ``` @Component({ selector: 'app-shadow-dom-encapsulation', template: ` <h2>ShadowDom</h2> <div class="shadow-message">Shadow DOM encapsulation</div> <app-emulated-encapsulation></app-emulated-encapsulation> <app-no-encapsulation></app-no-encapsulation> `, styles: ['h2, .shadow-message { color: blue; }'], encapsulation: ViewEncapsulation.ShadowDom, }) export class ShadowDomEncapsulationComponent { } ``` Angular adds styles for this component only to the shadow DOM host, so they are not visible outside the shadow DOM. > **NOTE**: Angular also adds the global styles from the `NoEncapsulationComponent` and `EmulatedEncapsulationComponent` to the shadow DOM host. Those styles are still available to the elements in the template of this component. > > In this example, the `ShadowDomEncapsulationComponent` contains both a `NoEncapsulationComponent` and `EmulatedEncapsulationComponent`. The styles added by the `ShadowDomEncapsulationComponent` component are available throughout the shadow DOM of this component, and so to both the `NoEncapsulationComponent` and `EmulatedEncapsulationComponent`. The `EmulatedEncapsulationComponent` has specific "scoped" styles, so the styling of this component's template is unaffected. Since styles from `ShadowDomEncapsulationComponent` are added to the shadow host after the global styles, the `h2` style overrides the style from the `NoEncapsulationComponent`. The result is that the `<h2>` element in the `NoEncapsulationComponent` is colored blue rather than red, which may not be what the component's author intended. Last reviewed on Mon Feb 28 2022
angular Configuring dependency providers Configuring dependency providers ================================ The Creating and injecting services topic describes how to use classes as dependencies. Besides classes, you can also use other values such as Boolean, string, date, and objects as dependencies. Angular DI provides the necessary APIs to make the dependency configuration flexible, so you can make those values available in DI. Specifying a provider token --------------------------- If you specify the service class as the provider token, the default behavior is for the injector to instantiate that class using the `new` operator. In the following example, the `Logger` class provides a `Logger` instance. ``` providers: [Logger] ``` You can, however, configure a DI to use a different class or any other different value to associate with the `Logger` class. So when the `Logger` is injected, this new value is used instead. In fact, the class provider syntax is a shorthand expression that expands into a provider configuration, defined by the `[Provider](../api/core/provider)` interface. Angular expands the `providers` value in this case into a full provider object as follows: ``` [{ provide: Logger, useClass: Logger }] ``` The expanded provider configuration is an object literal with two properties: * The `provide` property holds the token that serves as the key for both locating a dependency value and configuring the injector. * The second property is a provider definition object, which tells the injector how to create the dependency value. The provider-definition key can be one of the following: + useClass - this option tells Angular DI to instantiate a provided class when a dependency is injected + useExisting - allows you to alias a token and reference any existing one. + useFactory - allows you to define a function that constructs a dependency. + useValue - provides a static value that should be used as a dependency. The section below describes how to use the mentioned provider definition keys. ### Class providers: useClass The `useClass` provider key lets you create and return a new instance of the specified class. You can use this type of provider to substitute an alternative implementation for a common or default class. The alternative implementation can, for example, implement a different strategy, extend the default class, or emulate the behavior of the real class in a test case. In the following example, the `BetterLogger` class would be instantiated when the `Logger` dependency is requested in a component or any other class. ``` [{ provide: Logger, useClass: BetterLogger }] ``` If the alternative class providers have their own dependencies, specify both providers in the providers metadata property of the parent module or component. ``` [ UserService, { provide: Logger, useClass: EvenBetterLogger }] ``` In this example, `EvenBetterLogger` displays the user name in the log message. This logger gets the user from an injected `UserService` instance. ``` @Injectable() export class EvenBetterLogger extends Logger { constructor(private userService: UserService) { super(); } override log(message: string) { const name = this.userService.user.name; super.log(`Message to ${name}: ${message}`); } } ``` Angular DI knows how to construct the `UserService` dependency, since it has been configured above and is available in the injector. ### Alias providers: useExisting The `useExisting` provider key lets you map one token to another. 
In effect, the first token is an alias for the service associated with the second token, creating two ways to access the same service object. In the following example, the injector injects the singleton instance of `NewLogger` when the component asks for either the new or the old logger. In this way, `OldLogger` is an alias for `NewLogger`. ``` [ NewLogger, // Alias OldLogger w/ reference to NewLogger { provide: OldLogger, useExisting: NewLogger}] ``` Ensure you do not alias `OldLogger` to `NewLogger` with `useClass`, as this creates two different `NewLogger` instances. ### Factory providers: useFactory The `useFactory` provider key lets you create a dependency object by calling a factory function. With this approach you can create a dynamic value based on information available in the DI and elsewhere in the app. In the following example, only authorized users should see secret heroes in the `HeroService`. Authorization can change during the course of a single application session, as when a different user logs in . To keep security-sensitive information in `UserService` and out of `HeroService`, give the `HeroService` constructor a boolean flag to control display of secret heroes. ``` constructor( private logger: Logger, private isAuthorized: boolean) { } getHeroes() { const auth = this.isAuthorized ? 'authorized ' : 'unauthorized'; this.logger.log(`Getting heroes for ${auth} user.`); return HEROES.filter(hero => this.isAuthorized || !hero.isSecret); } ``` To implement the `isAuthorized` flag, use a factory provider to create a new logger instance for `HeroService`. ``` const heroServiceFactory = (logger: Logger, userService: UserService) => new HeroService(logger, userService.user.isAuthorized); ``` The factory function has access to `UserService`. You inject both `Logger` and `UserService` into the factory provider so the injector can pass them along to the factory function. ``` export const heroServiceProvider = { provide: HeroService, useFactory: heroServiceFactory, deps: [Logger, UserService] }; ``` * The `useFactory` field specifies that the provider is a factory function whose implementation is `heroServiceFactory`. * The `deps` property is an array of provider tokens. The `Logger` and `UserService` classes serve as tokens for their own class providers. The injector resolves these tokens and injects the corresponding services into the matching `heroServiceFactory` factory function parameters. Capturing the factory provider in the exported variable, `heroServiceProvider`, makes the factory provider reusable. ### Value providers: useValue The `useValue` key lets you associate a fixed value with a DI token. Use this technique to provide runtime configuration constants such as website base addresses and feature flags. You can also use a value provider in a unit test to provide mock data in place of a production data service. The next section provides more information about the `useValue` key. Using an `[InjectionToken](../api/core/injectiontoken)` object -------------------------------------------------------------- Define and use an `[InjectionToken](../api/core/injectiontoken)` object for choosing a provider token for non-class dependencies. The following example defines a token, `APP_CONFIG` of the type `[InjectionToken](../api/core/injectiontoken)`. 
``` import { InjectionToken } from '@angular/core'; export const APP_CONFIG = new InjectionToken<AppConfig>('app.config'); ``` The optional type parameter, `<AppConfig>`, and the token description, `app.config`, specify the token's purpose. Next, register the dependency provider in the component using the `[InjectionToken](../api/core/injectiontoken)` object of `APP_CONFIG`. ``` providers: [{ provide: APP_CONFIG, useValue: HERO_DI_CONFIG }] ``` Now, inject the configuration object into the constructor with the `@[Inject](../api/core/inject)()` parameter decorator. ``` constructor(@Inject(APP_CONFIG) config: AppConfig) { this.title = config.title; } ``` ### Interfaces and DI Though the TypeScript `AppConfig` interface supports typing within the class, the `AppConfig` interface plays no role in DI. In TypeScript, an interface is a design-time artifact, and does not have a runtime representation, or token, that the DI framework can use. When the transpiler changes TypeScript to JavaScript, the interface disappears because JavaScript doesn't have interfaces. Because there is no interface for Angular to find at runtime, the interface cannot be a token, nor can you inject it. ``` // Can't use interface as provider token [{ provide: AppConfig, useValue: HERO_DI_CONFIG }] ``` ``` // Can't inject using the interface as the parameter type constructor(private config: AppConfig) { } ``` What's next ----------- * [Dependency Injection in Action](dependency-injection-in-action) Last reviewed on Tue Aug 02 2022 angular NgModule FAQ NgModule FAQ ============ NgModules help organize an application into cohesive blocks of functionality. This page answers the questions many developers ask about NgModule design and implementation. What classes should I add to the `declarations` array? ------------------------------------------------------ Add [declarable](bootstrapping#the-declarations-array) classes —components, directives, and pipes— to a `declarations` list. Declare these classes in *exactly one* module of the application. Declare them in a module if they belong to that particular module. What is a `declarable`? ----------------------- Declarables are the class types —components, directives, and pipes— that you can add to a module's `declarations` list. They're the only classes that you can add to `declarations`. What classes should I `not` add to `declarations`? -------------------------------------------------- Add only [declarable](bootstrapping#the-declarations-array) classes to an NgModule's `declarations` list. Do *not* declare the following: * A class that's already declared in another module, whether an application module, @NgModule, or third-party module. * An array of directives imported from another module. For example, don't declare `FORMS_DIRECTIVES` from `@angular/forms` because the `[FormsModule](../api/forms/formsmodule)` already declares it. * Module classes. * Service classes. * Non-Angular classes and objects, such as strings, numbers, functions, entity models, configurations, business logic, and helper classes. Why list the same component in multiple `[NgModule](../api/core/ngmodule)` properties? -------------------------------------------------------------------------------------- `AppComponent` is often listed in both `declarations` and `bootstrap`. You might see the same component listed in `declarations` and `exports`. While that seems redundant, these properties have different functions. Membership in one list doesn't imply membership in another list.
* `AppComponent` could be declared in this module but not bootstrapped. * `AppComponent` could be bootstrapped in this module but declared in a different feature module. * A component could be imported from another application module (so you can't declare it) and re-exported by this module. * A component could be exported for inclusion in an external component's template as well as dynamically loaded in a pop-up dialog. What does "Can't bind to 'x' since it isn't a known property of 'y'" mean? -------------------------------------------------------------------------- This error often means that you haven't declared the directive "x" or haven't imported the NgModule to which "x" belongs. > Perhaps you declared "x" in an application submodule but forgot to export it. The "x" class isn't visible to other modules until you add it to the `exports` list. > > What should I import? --------------------- Import NgModules whose public (exported) [declarable classes](bootstrapping#the-declarations-array) you need to reference in this module's component templates. This always means importing `[CommonModule](../api/common/commonmodule)` from `@angular/common` for access to the Angular directives such as `[NgIf](../api/common/ngif)` and `[NgFor](../api/common/ngfor)`. You can import it directly or from another NgModule that [re-exports](ngmodule-faq#q-reexport) it. Import `[FormsModule](../api/forms/formsmodule)` from `@angular/forms` if your components have `[([ngModel](../api/forms/ngmodel))]` two-way binding expressions. Import *shared* and *feature* modules when this module's components incorporate their components, directives, and pipes. Import [BrowserModule](ngmodule-faq#q-browser-vs-common-module) only in the root `AppModule`. Should I import `[BrowserModule](../api/platform-browser/browsermodule)` or `[CommonModule](../api/common/commonmodule)`? ------------------------------------------------------------------------------------------------------------------------- The root application module, `AppModule`, of almost every browser application should import `[BrowserModule](../api/platform-browser/browsermodule)` from `@angular/platform-browser`. `[BrowserModule](../api/platform-browser/browsermodule)` provides services that are essential to launch and run a browser application. `[BrowserModule](../api/platform-browser/browsermodule)` also re-exports `[CommonModule](../api/common/commonmodule)` from `@angular/common`, which means that components in the `AppModule` also have access to the Angular directives every application needs, such as `[NgIf](../api/common/ngif)` and `[NgFor](../api/common/ngfor)`. Do not import `[BrowserModule](../api/platform-browser/browsermodule)` in any other module. *Feature modules* and *lazy-loaded modules* should import `[CommonModule](../api/common/commonmodule)` instead. They need the common directives. They don't need to re-install the app-wide providers. Importing `[CommonModule](../api/common/commonmodule)` also frees feature modules for use on *any* target platform, not just browsers. What if I import the same module twice? --------------------------------------- That's not a problem. When three modules all import Module 'A', Angular evaluates Module 'A' once, the first time it encounters it, and doesn't do so again. That's true at whatever level `A` appears in a hierarchy of imported NgModules. 
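To make that hierarchy concrete, here is a minimal sketch of four hypothetical modules wired up in the arrangement that the next paragraph walks through; the module names and empty metadata are illustrative only.

```
import { NgModule } from '@angular/core';

// Hypothetical modules that only illustrate the import graph.
@NgModule({})
export class AModule {}

@NgModule({ imports: [AModule] })
export class BModule {}

@NgModule({ imports: [BModule] })
export class CModule {}

// 'D' imports 'C', 'B', and 'A'; Angular still evaluates AModule only once.
@NgModule({ imports: [CModule, BModule, AModule] })
export class DModule {}
```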
When Module 'B' imports Module 'A', Module 'C' imports 'B', and Module 'D' imports `[C, B, A]`, then 'D' triggers the evaluation of 'C', which triggers the evaluation of 'B', which evaluates 'A'. When Angular gets to the 'B' and 'A' in 'D', they're already cached and ready to go. Angular doesn't like NgModules with circular references, so don't let Module 'A' import Module 'B', which imports Module 'A'. What should I export? --------------------- Export [declarable](bootstrapping#the-declarations-array) classes that components in *other* NgModules are able to reference in their templates. These are your *public* classes. If you don't export a declarable class, it stays *private*, visible only to other components declared in this NgModule. You *can* export any declarable class —components, directives, and pipes— whether it's declared in this NgModule or in an imported NgModule. You *can* re-export entire imported NgModules, which effectively re-export all of their exported classes. An NgModule can even export a module that it doesn't import. What should I `not` export? --------------------------- Don't export the following: * Private components, directives, and pipes that you need only within components declared in this NgModule. If you don't want another NgModule to see it, don't export it. * Non-declarable objects such as services, functions, configurations, and entity models. * Components that are only loaded dynamically by the router or by bootstrapping. Such [entry components](ngmodule-faq#q-entry-component-defined) can never be selected in another component's template. While there's no harm in exporting them, there's also no benefit. * Pure service modules that don't have public (exported) declarations. For example, there's no point in re-exporting `[HttpClientModule](../api/common/http/httpclientmodule)` because it doesn't export anything. Its only purpose is to add http service providers to the application as a whole. Can I re-export classes and modules? ------------------------------------ Absolutely. NgModules are a great way to selectively aggregate classes from other NgModules and re-export them in a consolidated, convenience module. An NgModule can re-export entire NgModules, which effectively re-exports all of their exported classes. Angular's own `[BrowserModule](../api/platform-browser/browsermodule)` exports a couple of NgModules like this: ``` exports: [CommonModule, ApplicationModule] ``` An NgModule can export a combination of its own declarations, selected imported classes, and imported NgModules. Don't bother re-exporting pure service modules. Pure service modules don't export [declarable](bootstrapping#the-declarations-array) classes that another NgModule could use. For example, there's no point in re-exporting `[HttpClientModule](../api/common/http/httpclientmodule)` because it doesn't export anything. Its only purpose is to add http service providers to the application as a whole. What is the `forRoot()` method? ------------------------------- The `forRoot()` static method is a convention that makes it easy for developers to configure services and providers that are intended to be singletons. A good example of `forRoot()` is the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method. Applications pass a `[Routes](../api/router/routes)` array to `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` in order to configure the app-wide `[Router](../api/router/router)` service with routes. 
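As a minimal sketch of that call, the root module might look like the following; the route paths and `HeroesComponent` are hypothetical and stand in for whatever your application actually routes to.

```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouterModule, Routes } from '@angular/router';

import { AppComponent } from './app.component';
import { HeroesComponent } from './heroes/heroes.component'; // hypothetical

// Hypothetical route table passed to RouterModule.forRoot().
const routes: Routes = [
  { path: 'heroes', component: HeroesComponent },
  { path: '', redirectTo: '/heroes', pathMatch: 'full' },
];

@NgModule({
  declarations: [AppComponent, HeroesComponent],
  imports: [
    BrowserModule,
    // forRoot() configures the app-wide Router service with these routes.
    RouterModule.forRoot(routes),
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```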
`[RouterModule.forRoot()](../api/router/routermodule#forRoot)` returns a [ModuleWithProviders](../api/core/modulewithproviders). You add that result to the `imports` list of the root `AppModule`. Only call and import a `forRoot()` result in the root application module, `AppModule`. Avoid importing it in any other module, particularly in a lazy-loaded module. For more information on `forRoot()` see [the `forRoot()` pattern](singleton-services#the-forroot-pattern) section of the [Singleton Services](singleton-services) guide. > **NOTE**: The `forRoot()` import can be used in a module other than `AppModule`. Importantly, `forRoot()` should only be called once, and the module that imports the `forRoot()` needs to be available to the root `ModuleInjector`. For more information, refer to the guide on [Hierarchical injectors](hierarchical-dependency-injection#moduleinjector). > > For a service, instead of using `forRoot()`, specify `providedIn: 'root'` on the service's `@[Injectable](../api/core/injectable)()` decorator, which makes the service automatically available to the whole application and thus singleton by default. `[RouterModule](../api/router/routermodule)` also offers a `forChild()` static method for configuring the routes of lazy-loaded modules. `forRoot()` and `forChild()` are conventional names for methods that configure services in root and feature modules respectively. Follow this convention when you write similar modules with configurable service providers. Why is a service provided in a feature module visible everywhere? ----------------------------------------------------------------- Providers listed in the `@[NgModule.providers](../api/core/ngmodule#providers)` of a bootstrapped module have application scope. Adding a service provider to `@[NgModule.providers](../api/core/ngmodule#providers)` effectively publishes the service to the entire application. When you import an NgModule, Angular adds the module's service providers (the contents of its `providers` list) to the application root injector. This makes the provider visible to every class in the application that knows the provider's lookup token, or name. Extensibility through NgModule imports is a primary goal of the NgModule system. Merging NgModule providers into the application injector makes it easy for a module library to enrich the entire application with new services. By adding the `[HttpClientModule](../api/common/http/httpclientmodule)` once, every application component can make HTTP requests. However, this might feel like an unwelcome surprise if you expect the module's services to be visible only to the components declared by that feature module. If the `HeroModule` provides the `HeroService` and the root `AppModule` imports `HeroModule`, any class that knows the `HeroService` *type* can inject that service, not just the classes declared in the `HeroModule`. To limit access to a service, consider lazy loading the NgModule that provides that service. See [How do I restrict service scope to a module?](ngmodule-faq#service-scope) for more information. Why is a service provided in a lazy-loaded module visible only to that module? ------------------------------------------------------------------------------ Unlike providers of the modules loaded at launch, providers of lazy-loaded modules are *module-scoped*. When the Angular router lazy-loads a module, it creates a new execution context. 
That [context has its own injector](ngmodule-faq#q-why-child-injector "Why Angular creates a child injector"), which is a direct child of the application injector. The router adds the lazy module's providers and the providers of its imported NgModules to this child injector. These providers are insulated from changes to application providers with the same lookup token. When the router creates a component within the lazy-loaded context, Angular prefers service instances created from these providers to the service instances of the application root injector. What if two modules provide the same service? --------------------------------------------- When two imported modules, loaded at the same time, list a provider with the same token, the second module's provider "wins". That's because both providers are added to the same injector. When Angular looks to inject a service for that token, it creates and delivers the instance created by the second provider. *Every* class that injects this service gets the instance created by the second provider. Even classes declared within the first module get the instance created by the second provider. If NgModule A provides a service for token 'X' and imports an NgModule B that also provides a service for token 'X', then NgModule A's service definition "wins". The service provided by the root `AppModule` takes precedence over services provided by imported NgModules. The `AppModule` always wins. How do I restrict service scope to a module? -------------------------------------------- When a module is loaded at application launch, its `@[NgModule.providers](../api/core/ngmodule#providers)` have *application-wide scope*; that is, they are available for injection throughout the application. Imported providers are easily replaced by providers from another imported NgModule. Such replacement might be by design. It could be unintentional and have adverse consequences. As a general rule, import modules with providers *exactly once*, preferably in the application's *root module*. That's also usually the best place to configure, wrap, and override them. Suppose a module requires a customized `[HttpBackend](../api/common/http/httpbackend)` that adds a special header for all Http requests. If another module elsewhere in the application also customizes `[HttpBackend](../api/common/http/httpbackend)` or merely imports the `[HttpClientModule](../api/common/http/httpclientmodule)`, it could override this module's `[HttpBackend](../api/common/http/httpbackend)` provider, losing the special header. The server will reject http requests from this module. To avoid this problem, import the `[HttpClientModule](../api/common/http/httpclientmodule)` only in the `AppModule`, the application *root module*. If you must guard against this kind of "provider corruption", *don't rely on a launch-time module's `providers`*. Load the module lazily if you can. Angular gives a [lazy-loaded module](ngmodule-faq#q-lazy-loaded-module-provider-visibility) its own child injector. The module's providers are visible only within the component tree created with this injector. If you must load the module eagerly, when the application starts, *provide the service in a component instead.* Continuing with the same example, suppose the components of a module truly require a private, custom `[HttpBackend](../api/common/http/httpbackend)`. Create a "top component" that acts as the root for all of the module's components. 
Add the custom `[HttpBackend](../api/common/http/httpbackend)` provider to the top component's `providers` list rather than the module's `providers`. Recall that Angular creates a child injector for each component instance and populates the injector with the component's own providers. When a child of this component asks for the `[HttpBackend](../api/common/http/httpbackend)` service, Angular provides the local `[HttpBackend](../api/common/http/httpbackend)` service, not the version provided in the application root injector. Child components make proper HTTP requests no matter what other modules do to `[HttpBackend](../api/common/http/httpbackend)`. Be sure to create module components as children of this module's top component. You can embed the child components in the top component's template. Alternatively, make the top component a routing host by giving it a `<[router-outlet](../api/router/routeroutlet)>`. Define child routes and let the router load module components into that outlet. Though you can limit access to a service by providing it in a lazy loaded module or providing it in a component, providing services in a component can lead to multiple instances of those services. Thus, the lazy loading is preferable. Should I add application-wide providers to the root `AppModule` or the root `AppComponent`? ------------------------------------------------------------------------------------------- Define application-wide providers by specifying `providedIn: 'root'` on its `@[Injectable](../api/core/injectable)()` decorator (in the case of services) or at `[InjectionToken](../api/core/injectiontoken)` construction (in the case where tokens are provided). Providers that are created this way automatically are made available to the entire application and don't need to be listed in any module. If a provider cannot be configured in this way (perhaps because it has no sensible default value), then register application-wide providers in the root `AppModule`, not in the `AppComponent`. Lazy-loaded modules and their components can inject `AppModule` services; they can't inject `AppComponent` services. Register a service in `AppComponent` providers *only* if the service must be hidden from components outside the `AppComponent` tree. This is a rare use case. More generally, [prefer registering providers in NgModules](ngmodule-faq#q-component-or-module) to registering in components. ### Discussion Angular registers all startup module providers with the application root injector. The services that root injector providers create have application scope, which means they are available to the entire application. Certain services, such as the `[Router](../api/router/router)`, only work when you register them in the application root injector. By contrast, Angular registers `AppComponent` providers with the `AppComponent`'s own injector. `AppComponent` services are available only to that component and its component tree. They have component scope. The `AppComponent`'s injector is a child of the root injector, one down in the injector hierarchy. For applications that don't use the router, that's almost the entire application. But in routed applications, routing operates at the root level where `AppComponent` services don't exist. This means that lazy-loaded modules can't reach them. Should I add other providers to a module or a component? -------------------------------------------------------- Providers should be configured using `@[Injectable](../api/core/injectable)` syntax. 
If possible, they should be provided in the application root (`providedIn: 'root'`). Services that are configured this way are lazily loaded if they are only used from a lazily loaded context. If it's the consumer's decision whether a provider is available application-wide or not, then register providers in modules (`@[NgModule.providers](../api/core/ngmodule#providers)`) instead of registering in components (`@Component.providers`). Register a provider with a component when you *must* limit the scope of a service instance to that component and its component tree. Apply the same reasoning to registering a provider with a directive. For example, an editing component that needs a private copy of a caching service should register the service with the component. Then each new instance of the component gets its own cached service instance. The changes that editor makes in its service don't touch the instances elsewhere in the application. [Always register *application-wide* services with the root `AppModule`](ngmodule-faq#q-root-component-or-module), not the root `AppComponent`. Why is it bad if a shared module provides a service to a lazy-loaded module? ---------------------------------------------------------------------------- ### The eagerly loaded scenario When an eagerly loaded module provides a service, for example a `UserService`, that service is available application-wide. If the root module provides `UserService` and imports another module that provides the same `UserService`, Angular registers one of them in the root application injector (see [What if I import the same module twice?](ngmodule-faq#q-reimport)). Then, when some component injects `UserService`, Angular finds it in the application root injector, and delivers the app-wide singleton service. No problem. ### The lazy loaded scenario Now consider a lazy loaded module that also provides a service called `UserService`. When the router lazy loads a module, it creates a child injector and registers the `UserService` provider with that child injector. The child injector is *not* the root injector. When Angular creates a lazy component for that module and injects `UserService`, it finds a `UserService` provider in the lazy module's *child injector* and creates a *new* instance of the `UserService`. This is an entirely different `UserService` instance than the app-wide singleton version that Angular injected in one of the eagerly loaded components. This scenario causes your application to create a new instance every time, instead of using the singleton. Why does lazy loading create a child injector? ---------------------------------------------- Angular adds `@[NgModule.providers](../api/core/ngmodule#providers)` to the application root injector, unless the NgModule is lazy-loaded. For a lazy-loaded NgModule, Angular creates a *child injector* and adds the module's providers to the child injector. This means that an NgModule behaves differently depending on whether it's loaded during application start or lazy-loaded later. Neglecting that difference can lead to [adverse consequences](ngmodule-faq#q-why-bad). Why doesn't Angular add lazy-loaded providers to the application root injector as it does for eagerly loaded NgModules? The answer is grounded in a fundamental characteristic of the Angular dependency-injection system. An injector can add providers *until it's first used*. Once an injector starts creating and delivering services, its provider list is frozen; no new providers are allowed. 
When an application starts, Angular first configures the root injector with the providers of all eagerly loaded NgModules *before* creating its first component and injecting any of the provided services. Once the application begins, the application root injector is closed to new providers. Time passes and application logic triggers lazy loading of an NgModule. Angular must add the lazy-loaded module's providers to an injector somewhere. It can't add them to the application root injector because that injector is closed to new providers. So Angular creates a new child injector for the lazy-loaded module context. How can I tell if an NgModule or service was previously loaded? --------------------------------------------------------------- Some NgModules and their services should be loaded only once by the root `AppModule`. Importing the module a second time by lazy loading a module could [produce errant behavior](ngmodule-faq#q-why-bad) that may be difficult to detect and diagnose. To prevent this issue, write a constructor that attempts to inject the module or service from the root application injector. If the injection succeeds, the class has been loaded a second time. You can throw an error or take other remedial action. Certain NgModules, such as `[BrowserModule](../api/platform-browser/browsermodule)`, implement such a guard. Here is a custom constructor for an NgModule called `GreetingModule`. ``` constructor(@Optional() @SkipSelf() parentModule?: GreetingModule) { if (parentModule) { throw new Error( 'GreetingModule is already loaded. Import it in the AppModule only'); } } ``` What is an `entry component`? ----------------------------- An entry component is any component that Angular loads *imperatively* by type. A component loaded *declaratively* by way of its selector is *not* an entry component. Angular loads a component declaratively when using the component's selector to locate the element in the template. Angular then creates the HTML representation of the component and inserts it into the DOM at the selected element. These aren't entry components. The bootstrapped root `AppComponent` is an *entry component*. True, its selector matches an element tag in `index.html`. But `index.html` isn't a component template and the `AppComponent` selector doesn't match an element in any component template. Components in route definitions are also *entry components*. A route definition refers to a component by its *type*. The router ignores a routed component's selector, if it even has one, and loads the component dynamically into a `[RouterOutlet](../api/router/routeroutlet)`. For more information, see [Entry Components](entry-components). What kinds of modules should I have and how should I use them? -------------------------------------------------------------- Every application is different. Developers have various levels of experience and comfort with the available choices. Some suggestions and guidelines appear to have wide appeal. ### `SharedModule` `SharedModule` is a conventional name for an `[NgModule](../api/core/ngmodule)` with the components, directives, and pipes that you use everywhere in your application. This module should consist entirely of `declarations`, most of them exported. The `SharedModule` may re-export other widget modules, such as `[CommonModule](../api/common/commonmodule)`, `[FormsModule](../api/forms/formsmodule)`, and NgModules with the UI controls that you use most widely. 
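A minimal sketch of such a `SharedModule`, using hypothetical `AwesomePipe` and `HighlightDirective` widgets, might look like this:

```
import { NgModule, Directive, Pipe, PipeTransform } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';

// Hypothetical app-wide widgets that the SharedModule declares and exports.
@Pipe({ name: 'awesome' })
export class AwesomePipe implements PipeTransform {
  transform(value: string): string { return `Awesome ${value}`; }
}

@Directive({ selector: '[appHighlight]' })
export class HighlightDirective {}

@NgModule({
  imports: [CommonModule],
  declarations: [AwesomePipe, HighlightDirective],
  // Export the module's own declarations plus the widget modules consumers use most.
  exports: [AwesomePipe, HighlightDirective, CommonModule, FormsModule],
})
export class SharedModule {}
```

Feature modules that import `SharedModule` then receive the common declarations and the re-exported widget modules in a single import.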
The `SharedModule` should not have `providers` for reasons [explained previously](ngmodule-faq#q-why-bad). Nor should any of its imported or re-exported modules have `providers`. Import the `SharedModule` in your *feature* modules, both those loaded when the application starts and those you lazy load later. ### Feature Modules Feature modules are modules you create around specific application business domains, user workflows, and utility collections. They support your application by containing a particular feature, such as routes, services, widgets, etc. To conceptualize what a feature module might be in your app, consider that if you would put the files related to a certain functionality, like a search, in one folder, that the contents of that folder would be a feature module that you might call your `SearchModule`. It would contain all of the components, routing, and templates that would make up the search functionality. For more information, see [Feature Modules](feature-modules) and [Module Types](module-types) What's the difference between NgModules and JavaScript Modules? --------------------------------------------------------------- In an Angular app, NgModules and JavaScript modules work together. In modern JavaScript, every file is a module (see the [Modules](https://exploringjs.com/es6/ch_modules.html) page of the Exploring ES6 website). Within each file you write an `export` statement to make parts of the module public. An Angular NgModule is a class with the `@[NgModule](../api/core/ngmodule)` decorator —JavaScript modules don't have to have the `@[NgModule](../api/core/ngmodule)` decorator. Angular's `[NgModule](../api/core/ngmodule)` has `imports` and `exports` and they serve a similar purpose. You *import* other NgModules so you can use their exported classes in component templates. You *export* this NgModule's classes so they can be imported and used by components of *other* NgModules. For more information, see [JavaScript Modules vs. NgModules](ngmodule-vs-jsmodule). How does Angular find components, directives, and pipes in a template? What is a **template reference**? -------------------------------------------------------------------------------------------------------- The [Angular compiler](ngmodule-faq#q-angular-compiler) looks inside component templates for other components, directives, and pipes. When it finds one, that's a template reference. The Angular compiler finds a component or directive in a template when it can match the *selector* of that component or directive to some HTML in that template. The compiler finds a pipe if the pipe's *name* appears within the pipe syntax of the template HTML. Angular only matches selectors and pipe names for classes that are declared by this module or exported by a module that this module imports. What is the Angular compiler? ----------------------------- The Angular compiler converts the application code you write into highly performant JavaScript code. The `@[NgModule](../api/core/ngmodule)` metadata plays an important role in guiding the compilation process. The code you write isn't immediately executable. For example, components have templates that contain custom elements, attribute directives, Angular binding declarations, and some peculiar syntax that clearly isn't native HTML. The Angular compiler reads the template markup, combines it with the corresponding component class code, and emits *component factories*. 
A component factory creates a pure, 100% JavaScript representation of the component that incorporates everything described in its `@[Component](../api/core/component)` metadata: The HTML, the binding instructions, the attached styles. Because directives and pipes appear in component templates, the Angular compiler incorporates them into compiled component code too. `@[NgModule](../api/core/ngmodule)` metadata tells the Angular compiler what components to compile for this module and how to link this module with other modules. Last reviewed on Mon Feb 28 2022
angular Property binding best practices Property binding best practices =============================== By following a few guidelines, you can use property binding in a way that helps you reduce bugs and keep your code readable. > See the for a working example containing the code snippets in this guide. > > Avoid side effects ------------------ Evaluation of a template expression should have no visible side effects. Use the syntax for template expressions to help avoid side effects. In general, the correct syntax prevents you from assigning a value to anything in a property binding expression. The syntax also prevents you from using increment and decrement operators. ### An example of producing side effects If you had an expression that changed the value of something else that you were binding to, that change of value would be a side effect. Angular might or might not display the changed value. If Angular does detect the change, it throws an error. As a best practice, use only properties and methods that return values. Return the proper type ---------------------- A template expression should result in the type of value that the target property expects. For example, return: * a `string`, if the target property expects a string * a `number`, if it expects a number * an `object`, if it expects an object. ### Passing in a string In the following example, the `childItem` property of the `ItemDetailComponent` expects a string. ``` <app-item-detail [childItem]="parentItem"></app-item-detail> ``` Confirm this expectation by looking in the `ItemDetailComponent` where the `@[Input](../api/core/input)()` type is `string`: ``` @Input() childItem = ''; ``` The `parentItem` in `AppComponent` is a string, which means that the expression, `parentItem` within `[childItem]="parentItem"`, evaluates to a string. ``` parentItem = 'lamp'; ``` If `parentItem` were some other type, you would need to specify `childItem` `@[Input](../api/core/input)()` as that type as well. ### Passing in an object In this example, `ItemListComponent` is a child component of `AppComponent` and the `items` property expects an array of objects. ``` <app-item-list [items]="currentItems"></app-item-list> ``` In the `ItemListComponent` the `@[Input](../api/core/input)()`, `items`, has a type of `Item[]`. ``` @Input() items: Item[] = []; ``` Notice that `Item` is an object that it has two properties, an `id` and a `name`. ``` export interface Item { id: number; name: string; } ``` In `app.component.ts`, `currentItems` is an array of objects in the same shape as the `Item` object in `items.ts`, with an `id` and a `name`. ``` currentItems = [{ id: 21, name: 'phone' }]; ``` By supplying an object in the same shape, you meet the expectations of `items` when Angular evaluates the expression `currentItems`. Last reviewed on Mon Feb 28 2022 angular Building and serving Angular apps Building and serving Angular apps ================================= This page discusses build-specific configuration options for Angular projects. Configuring application environments ------------------------------------ You can define different named build configurations for your project, such as `development` and `staging`, with different defaults. Each named configuration can have defaults for any of the options that apply to the various [builder targets](glossary#target), such as `build`, `serve`, and `test`. The [Angular CLI](cli) `build`, `serve`, and `test` commands can then replace files with appropriate versions for your intended target environment. 
### Configure environment-specific defaults Using the Angular CLI, start by running the [generate environments command](cli/generate#environments-command) shown here to create the `src/environments/` directory and configure the project to use these files. ``` ng generate environments ``` The project's `src/environments/` directory contains the base configuration file, `environment.ts`, which provides configuration for `production`, the default environment. You can override default values for additional environments, such as `development` and `staging`, in target-specific configuration files. For example: ``` myProject/src/environments environment.ts environment.development.ts environment.staging.ts ``` The base file `environment.ts`, contains the default environment settings. For example: ``` export const environment = { production: true }; ``` The `build` command uses this as the build target when no environment is specified. You can add further variables, either as additional properties on the environment object, or as separate objects. For example, the following adds a default for a variable to the default environment: ``` export const environment = { production: true, apiUrl: 'http://my-prod-url' }; ``` You can add target-specific configuration files, such as `environment.development.ts`. The following content sets default values for the development build target: ``` export const environment = { production: false, apiUrl: 'http://my-api-url' }; ``` ### Using environment-specific variables in your app The following application structure configures build targets for `development` and `staging` environments: ``` src app app.component.html app.component.ts environments environment.ts environment.development.ts environment.staging.ts ``` To use the environment configurations you have defined, your components must import the original environments file: ``` import { environment } from './../environments/environment'; ``` This ensures that the build and serve commands can find the configurations for specific build targets. The following code in the component file (`app.component.ts`) uses an environment variable defined in the configuration files. ``` import { Component } from '@angular/core'; import { environment } from './../environments/environment'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { constructor() { console.log(environment.production); // Logs false for development environment } title = 'app works!'; } ``` Configure target-specific file replacements ------------------------------------------- The main CLI configuration file, `angular.json`, contains a `fileReplacements` section in the configuration for each build target, which lets you replace any file in the TypeScript program with a target-specific version of that file. This is useful for including target-specific code or variables in a build that targets a specific environment, such as production or staging. By default no files are replaced. You can add file replacements for specific build targets. For example: ``` "configurations": { "development": { "fileReplacements": [ { "replace": "src/environments/environment.ts", "with": "src/environments/environment.development.ts" } ], … ``` This means that when you build your development configuration with `ng build --configuration development`, the `src/environments/environment.ts` file is replaced with the target-specific version of the file, `src/environments/environment.development.ts`. 
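The swapped-in file is an ordinary TypeScript module exporting the same `environment` shape. For the staging target added below, for example, `environment.staging.ts` might contain something like this (the `apiUrl` value is a placeholder):

```
export const environment = {
  production: true,
  // Placeholder value; point this at the staging backend.
  apiUrl: 'http://my-staging-url'
};
```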
You can add additional configurations as required. To add a staging environment, create a copy of `src/environments/environment.ts` called `src/environments/environment.staging.ts`, then add a `staging` configuration to `angular.json`: ``` "configurations": { "development": { … }, "production": { … }, "staging": { "fileReplacements": [ { "replace": "src/environments/environment.ts", "with": "src/environments/environment.staging.ts" } ] } } ``` You can add more configuration options to this target environment as well. Any option that your build supports can be overridden in a build target configuration. To build using the staging configuration, run the following command: ``` ng build --configuration=staging ``` You can also configure the `serve` command to use the targeted build configuration if you add it to the "serve:configurations" section of `angular.json`: ``` "serve": { "builder": "@angular-devkit/build-angular:dev-server", "options": { "browserTarget": "your-project-name:build" }, "configurations": { "development": { "browserTarget": "your-project-name:build:development" }, "production": { "browserTarget": "your-project-name:build:production" }, "staging": { "browserTarget": "your-project-name:build:staging" } } }, ``` Configuring size budgets ------------------------ As applications grow in functionality, they also grow in size. The CLI lets you set size thresholds in your configuration to ensure that parts of your application stay within size boundaries that you define. Define your size boundaries in the CLI configuration file, `angular.json`, in a `budgets` section for each [configured environment](build#app-environments). ``` { … "configurations": { "production": { … "budgets": [] } } } ``` You can specify size budgets for the entire app, and for particular parts. Each budget entry configures a budget of a given type. Specify size values in the following formats: | Size value | Details | | --- | --- | | `123` or `123b` | Size in bytes. | | `123kb` | Size in kilobytes. | | `123mb` | Size in megabytes. | | `12%` | Percentage of size relative to baseline. (Not valid for baseline values.) | When you configure a budget, the build system warns or reports an error when a given part of the application reaches or exceeds a boundary size that you set. Each budget entry is a JSON object with the following properties: | Property | Value | | --- | --- | | type | The type of budget. One of: | Value | Details | | --- | --- | | `bundle` | The size of a specific bundle. | | `initial` | The size of JavaScript needed for bootstrapping the application. Defaults to warning at 500kb and erroring at 1mb. | | `allScript` | The size of all scripts. | | `all` | The size of the entire application. | | `anyComponentStyle` | This size of any one component stylesheet. Defaults to warning at 2kb and erroring at 4kb. | | `anyScript` | The size of any one script. | | `any` | The size of any file. | | | name | The name of the bundle (for `type=bundle`). | | baseline | The baseline size for comparison. | | maximumWarning | The maximum threshold for warning relative to the baseline. | | maximumError | The maximum threshold for error relative to the baseline. | | minimumWarning | The minimum threshold for warning relative to the baseline. | | minimumError | The minimum threshold for error relative to the baseline. | | warning | The threshold for warning relative to the baseline (min & max). | | error | The threshold for error relative to the baseline (min & max). 
| Configuring CommonJS dependencies --------------------------------- > It is recommended that you avoid depending on CommonJS modules in your Angular applications. Depending on CommonJS modules can prevent bundlers and minifiers from optimizing your application, which results in larger bundle sizes. Instead, it is recommended that you use [ECMAScript modules](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/import) in your entire application. For more information, see [How CommonJS is making your bundles larger](https://web.dev/commonjs-larger-bundles). > > The Angular CLI outputs warnings if it detects that your browser application depends on CommonJS modules. To disable these warnings, add the CommonJS module name to `allowedCommonJsDependencies` option in the `build` options located in `angular.json` file. ``` "build": { "builder": "@angular-devkit/build-angular:browser", "options": { "allowedCommonJsDependencies": [ "lodash" ] … } … }, ``` Configuring browser compatibility --------------------------------- The Angular CLI uses [Browserslist](https://github.com/browserslist/browserslist) to ensure compatibility with different browser versions. [Autoprefixer](https://github.com/postcss/autoprefixer) is used for CSS vendor prefixing and [@babel/preset-env](https://babeljs.io/docs/en/babel-preset-env) for JavaScript syntax transformations. Internally, the Angular CLI uses the below `browserslist` configuration which matches the [browsers that are supported](browser-support) by Angular. ``` last 2 Chrome versions last 1 Firefox version last 2 Edge major versions last 2 Safari major versions last 2 iOS major versions Firefox ESR ``` To override the internal configuration, run [`ng generate config browserslist`](cli/generate#config-command), which generates a `.browserslistrc` configuration file in the the project directory. See the [browserslist repository](https://github.com/browserslist/browserslist) for more examples of how to target specific browsers and versions. > Use [browsersl.ist](https://browsersl.ist) to display compatible browsers for a `browserslist` query. > > Proxying to a backend server ---------------------------- Use the [proxying support](https://webpack.js.org/configuration/dev-server/#devserverproxy) in the `webpack` development server to divert certain URLs to a backend server, by passing a file to the `--proxy-config` build option. For example, to divert all calls for `http://localhost:4200/api` to a server running on `http://localhost:3000/api`, take the following steps. 1. Create a file `proxy.conf.json` in your project's `src/` folder. 2. Add the following content to the new proxy file: ``` { "/api": { "target": "http://localhost:3000", "secure": false } } ``` 3. In the CLI configuration file, `angular.json`, add the `proxyConfig` option to the `serve` target: ``` … "architect": { "serve": { "builder": "@angular-devkit/build-angular:dev-server", "options": { "browserTarget": "your-application-name:build", "proxyConfig": "src/proxy.conf.json" }, … ``` 4. To run the development server with this proxy configuration, call `ng serve`. Edit the proxy configuration file to add configuration options; following are some examples. For a description of all options, see [webpack DevServer documentation](https://webpack.js.org/configuration/dev-server/#devserverproxy). > **NOTE**: If you edit the proxy configuration file, you must relaunch the `ng serve` process to make your changes effective. 
> > ### Rewrite the URL path The `pathRewrite` proxy configuration option lets you rewrite the URL path at run time. For example, specify the following `pathRewrite` value to the proxy configuration to remove "api" from the end of a path. ``` { "/api": { "target": "http://localhost:3000", "secure": false, "pathRewrite": { "^/api": "" } } } ``` If you need to access a backend that is not on `localhost`, set the `changeOrigin` option as well. For example: ``` { "/api": { "target": "http://npmjs.org", "secure": false, "pathRewrite": { "^/api": "" }, "changeOrigin": true } } ``` To help determine whether your proxy is working as intended, set the `logLevel` option. For example: ``` { "/api": { "target": "http://localhost:3000", "secure": false, "pathRewrite": { "^/api": "" }, "logLevel": "debug" } } ``` Proxy log levels are `info` (the default), `debug`, `warn`, `error`, and `silent`. ### Proxy multiple entries You can proxy multiple entries to the same target by defining the configuration in JavaScript. Set the proxy configuration file to `proxy.conf.mjs` (instead of `proxy.conf.json`), and specify configuration files as in the following example. ``` export default [ { context: [ '/my', '/many', '/endpoints', '/i', '/need', '/to', '/proxy' ], target: 'http://localhost:3000', secure: false } ]; ``` In the CLI configuration file, `angular.json`, point to the JavaScript proxy configuration file: ``` … "architect": { "serve": { "builder": "@angular-devkit/build-angular:dev-server", "options": { "browserTarget": "your-application-name:build", "proxyConfig": "src/proxy.conf.mjs" }, … ``` ### Bypass the proxy If you need to optionally bypass the proxy, or dynamically change the request before it's sent, add the bypass option, as shown in this JavaScript example. ``` export default { '/api/proxy': { "target": 'http://localhost:3000', "secure": false, "bypass": function (req, res, proxyOptions) { if (req.headers.accept.includes('html')) { console.log('Skipping proxy for browser request.'); return '/index.html'; } req.headers['X-Custom-Header'] = 'yes'; } } }; ``` ### Using corporate proxy If you work behind a corporate proxy, the backend cannot directly proxy calls to any URL outside your local network. In this case, you can configure the backend proxy to redirect calls through your corporate proxy using an agent: ``` npm install --save-dev https-proxy-agent ``` When you define an environment variable `http_proxy` or `HTTP_PROXY`, an agent is automatically added to pass calls through your corporate proxy when running `npm start`. Use the following content in the JavaScript configuration file. ``` import HttpsProxyAgent from 'https-proxy-agent'; const proxyConfig = [{ context: '/api', target: 'http://your-remote-server.com:3000', secure: false }]; export default (proxyConfig) => { const proxyServer = process.env.http_proxy || process.env.HTTP_PROXY; if (proxyServer) { const agent = new HttpsProxyAgent(proxyServer); console.log('Using corporate proxy server: ' + proxyServer); for (const entry of proxyConfig) { entry.agent = agent; } } return proxyConfig; }; ``` Last reviewed on Tue Jan 17 2023 angular Accessibility in Angular Accessibility in Angular ======================== The web is used by a wide variety of people, including those who have visual or motor impairments. A variety of assistive technologies are available that make it much easier for these groups to interact with web-based software applications. 
Also, designing an application to be more accessible generally improves the user experience for all users. For an in-depth introduction to issues and techniques for designing accessible applications, see the [Accessibility](https://developers.google.com/web/fundamentals/accessibility/#what_is_accessibility) section of the Google's [Web Fundamentals](https://developers.google.com/web/fundamentals). This page discusses best practices for designing Angular applications that work well for all users, including those who rely on assistive technologies. > For the sample application that this page describes, see the live example. > > Accessibility attributes ------------------------ Building accessible web experience often involves setting [Accessible Rich Internet Applications (ARIA) attributes](https://developers.google.com/web/fundamentals/accessibility/semantics-aria) to provide semantic meaning where it might otherwise be missing. Use [attribute binding](attribute-binding) template syntax to control the values of accessibility-related attributes. When binding to ARIA attributes in Angular, you must use the `attr.` prefix. The ARIA specification depends specifically on HTML attributes rather than properties of DOM elements. ``` <!-- Use attr. when binding to an ARIA attribute --> <button [attr.aria-label]="myActionLabel">…</button> ``` > **NOTE** This syntax is only necessary for attribute *bindings*. Static ARIA attributes require no extra syntax. > > > ``` > <!-- Static ARIA attributes require no extra syntax --> > <button aria-label="Save document">…</button> > ``` > > By convention, HTML attributes use lowercase names (`tabindex`), while properties use camelCase names (`tabIndex`). > > See the [Binding syntax](binding-syntax#html-attribute-vs-dom-property) guide for more background on the difference between attributes and properties. > > Angular UI components --------------------- The [Angular Material](https://material.angular.io) library, which is maintained by the Angular team, is a suite of reusable UI components that aims to be fully accessible. The [Component Development Kit (CDK)](https://material.angular.io/cdk/categories) includes the `a11y` package that provides tools to support various areas of accessibility. For example: * `LiveAnnouncer` is used to announce messages for screen-reader users using an `aria-live` region. See the W3C documentation for more information on [aria-live regions](https://www.w3.org/WAI/PF/aria-1.1/states_and_properties#aria-live). * The `cdkTrapFocus` directive traps Tab-key focus within an element. Use it to create accessible experience for components such as modal dialogs, where focus must be constrained. For full details of these and other tools, see the [Angular CDK accessibility overview](https://material.angular.io/cdk/a11y/overview). ### Augmenting native elements Native HTML elements capture several standard interaction patterns that are important to accessibility. When authoring Angular components, you should re-use these native elements directly when possible, rather than re-implementing well-supported behaviors. For example, instead of creating a custom element for a new variety of button, create a component that uses an attribute selector with a native `<button>` element. This most commonly applies to `<button>` and `<a>`, but can be used with many other types of element. 
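As a rough sketch, such a component might look like this (the selector and class name are hypothetical):

```
import { Component } from '@angular/core';

// Attaches to a native <button> through an attribute selector, so the component
// keeps the element's built-in keyboard, focus, and screen-reader semantics
// instead of re-implementing them on a custom element.
@Component({
  selector: 'button[app-fancy-button]',
  template: '<ng-content></ng-content>',
})
export class FancyButtonComponent {}
```

Used as `<button app-fancy-button>Save</button>`, the host element is still a native `<button>`, so its built-in focus, keyboard, and role semantics carry over without extra ARIA work.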
You can see examples of this pattern in Angular Material: [`MatButton`](https://github.com/angular/components/blob/50d3f29b6dc717b512dbd0234ce76f4ab7e9762a/src/material/button/button.ts#L67-L69), [`MatTabNav`](https://github.com/angular/components/blob/50d3f29b6dc717b512dbd0234ce76f4ab7e9762a/src/material/tabs/tab-nav-bar/tab-nav-bar.ts#L139), and [`MatTable`](https://github.com/angular/components/blob/50d3f29b6dc717b512dbd0234ce76f4ab7e9762a/src/material/table/table.ts#L22). ### Using containers for native elements Sometimes using the appropriate native element requires a container element. For example, the native `<input>` element cannot have children, so any custom text entry components need to wrap an `<input>` with extra elements. By just including `<input>` in your custom component's template, it's impossible for your component's users to set arbitrary properties and attributes to the `<input>` element. Instead, create a container component that uses content projection to include the native control in the component's API. You can see [`MatFormField`](https://material.angular.io/components/form-field/overview) as an example of this pattern. Case study: Building a custom progress bar ------------------------------------------ The following example shows how to make a progress bar accessible by using host binding to control accessibility-related attributes. * The component defines an accessibility-enabled element with both the standard HTML attribute `role`, and ARIA attributes. The ARIA attribute `aria-valuenow` is bound to the user's input. ``` import { Component, Input } from '@angular/core'; /** * Example progressbar component. */ @Component({ selector: 'app-example-progressbar', template: '<div class="bar" [style.width.%]="value"></div>', styleUrls: ['./progress-bar.component.css'], host: { // Sets the role for this component to "progressbar" role: 'progressbar', // Sets the minimum and maximum values for the progressbar role. 'aria-valuemin': '0', 'aria-valuemax': '100', // Binding that updates the current value of the progressbar. '[attr.aria-valuenow]': 'value', } }) export class ExampleProgressbarComponent { /** Current value of the progressbar. */ @Input() value = 0; } ``` * In the template, the `aria-label` attribute ensures that the control is accessible to screen readers. ``` <label> Enter an example progress value <input type="number" min="0" max="100" [value]="progress" (input)="setProgress($event)"> </label> <!-- The user of the progressbar sets an aria-label to communicate what the progress means. --> <app-example-progressbar [value]="progress" aria-label="Example of a progress bar"> </app-example-progressbar> ``` Routing ------- ### Focus management after navigation Tracking and controlling [focus](https://developers.google.com/web/fundamentals/accessibility/focus) in a UI is an important consideration in designing for accessibility. When using Angular routing, you should decide where page focus goes upon navigation. To avoid relying solely on visual cues, you need to make sure your routing code updates focus after page navigation. Use the `[NavigationEnd](../api/router/navigationend)` event from the `[Router](../api/router/router)` service to know when to update focus. The following example shows how to find and focus the main content header in the DOM after navigation. 
``` router.events.pipe(filter(e => e instanceof NavigationEnd)).subscribe(() => { const mainHeader = document.querySelector('#main-content-header') if (mainHeader) { mainHeader.focus(); } }); ``` In a real application, the element that receives focus depends on your specific application structure and layout. The focused element should put users in a position to immediately move into the main content that has just been routed into view. You should avoid situations where focus returns to the `body` element after a route change. ### Active links identification CSS classes applied to active `[RouterLink](../api/router/routerlink)` elements, such as `[RouterLinkActive](../api/router/routerlinkactive)`, provide a visual cue to identify the active link. Unfortunately, a visual cue doesn't help blind or visually impaired users. Applying the `aria-current` attribute to the element can help identify the active link. For more information, see [Mozilla Developer Network (MDN) aria-current](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Attributes/aria-current)). The `[RouterLinkActive](../api/router/routerlinkactive)` directive provides the `ariaCurrentWhenActive` input which sets the `aria-current` to a specified value when the link becomes active. The following example shows how to apply the `active-page` class to active links as well as setting their `aria-current` attribute to `"page"` when they are active: ``` <nav> <a routerLink="home" routerLinkActive="active-page" ariaCurrentWhenActive="page"> Home </a> <a routerLink="about" routerLinkActive="active-page" ariaCurrentWhenActive="page"> About </a> <a routerLink="shop" routerLinkActive="active-page" ariaCurrentWhenActive="page"> Shop </a> </nav> ``` More information ---------------- * [Accessibility - Google Web Fundamentals](https://developers.google.com/web/fundamentals/accessibility) * [ARIA specification and authoring practices](https://www.w3.org/TR/wai-aria) * [Material Design - Accessibility](https://material.io/design/usability/accessibility.html) * [Smashing Magazine](https://www.smashingmagazine.com/search/?q=accessibility) * [Inclusive Components](https://inclusive-components.design) * [Accessibility Resources and Code Examples](https://dequeuniversity.com/resources) * [W3C - Web Accessibility Initiative](https://www.w3.org/WAI/people-use-web) * [Rob Dodson A11ycasts](https://www.youtube.com/watch?v=HtTyRajRuyY) * [Angular ESLint](https://github.com/angular-eslint/angular-eslint#functionality) provides linting rules that can help you make sure your code meets accessibility standards. Books * "A Web for Everyone: Designing Accessible User Experiences," Sarah Horton and Whitney Quesenbery * "Inclusive Design Patterns," Heydon Pickering Last reviewed on Mon Feb 28 2022
angular Ahead-of-time (AOT) compilation Ahead-of-time (AOT) compilation =============================== An Angular application consists mainly of components and their HTML templates. Because the components and templates provided by Angular cannot be understood by the browser directly, Angular applications require a compilation process before they can run in a browser. The Angular [ahead-of-time (AOT) compiler](glossary#aot) converts your Angular HTML and TypeScript code into efficient JavaScript code during the build phase *before* the browser downloads and runs that code. Compiling your application during the build process provides a faster rendering in the browser. This guide explains how to specify metadata and apply available compiler options to compile your applications efficiently using the AOT compiler. > [Watch Alex Rickabaugh explain the Angular compiler](https://www.youtube.com/watch?v=anphffaCZrQ) at AngularConnect 2019. > > Here are some reasons you might want to use AOT. | Reasons | Details | | --- | --- | | Faster rendering | With AOT, the browser downloads a pre-compiled version of the application. The browser loads executable code so it can render the application immediately, without waiting to compile the application first. | | Fewer asynchronous requests | The compiler *inlines* external HTML templates and CSS style sheets within the application JavaScript, eliminating separate ajax requests for those source files. | | Smaller Angular framework download size | There's no need to download the Angular compiler if the application is already compiled. The compiler is roughly half of Angular itself, so omitting it dramatically reduces the application payload. | | Detect template errors earlier | The AOT compiler detects and reports template binding errors during the build step before users can see them. | | Better security | AOT compiles HTML templates and components into JavaScript files long before they are served to the client. With no templates to read and no risky client-side HTML or JavaScript evaluation, there are fewer opportunities for injection attacks. | Choosing a compiler ------------------- Angular offers two ways to compile your application: | Angular compile | Details | | --- | --- | | Just-in-Time (JIT) | Compiles your application in the browser at runtime. This was the default until Angular 8. | | Ahead-of-Time (AOT) | Compiles your application and libraries at build time. This is the default starting in Angular 9. | When you run the [`ng build`](cli/build) (build only) or [`ng serve`](cli/serve) (build and serve locally) CLI commands, the type of compilation (JIT or AOT) depends on the value of the `aot` property in your build configuration specified in `angular.json`. By default, `aot` is set to `true` for new CLI applications. See the [CLI command reference](cli) and [Building and serving Angular apps](build) for more information. How AOT works ------------- The Angular AOT compiler extracts **metadata** to interpret the parts of the application that Angular is supposed to manage. You can specify the metadata explicitly in **decorators** such as `@[Component](../api/core/component)()` and `@[Input](../api/core/input)()`, or implicitly in the constructor declarations of the decorated classes. The metadata tells Angular how to construct instances of your application classes and interact with them at runtime. 
In the following example, the `@[Component](../api/core/component)()` metadata object and the class constructor tell Angular how to create and display an instance of `TypicalComponent`. ``` @Component({ selector: 'app-typical', template: '<div>A typical component for {{data.name}}</div>' }) export class TypicalComponent { @Input() data: TypicalData; constructor(private someService: SomeService) { … } } ``` The Angular compiler extracts the metadata *once* and generates a *factory* for `TypicalComponent`. When it needs to create a `TypicalComponent` instance, Angular calls the factory, which produces a new visual element, bound to a new instance of the component class with its injected dependency. ### Compilation phases There are three phases of AOT compilation. | | Phase | Details | | --- | --- | --- | | 1 | code analysis | In this phase, the TypeScript compiler and *AOT collector* create a representation of the source. The collector does not attempt to interpret the metadata it collects. It represents the metadata as best it can and records errors when it detects a metadata syntax violation. | | 2 | code generation | In this phase, the compiler's `StaticReflector` interprets the metadata collected in phase 1, performs additional validation of the metadata, and throws an error if it detects a metadata restriction violation. | | 3 | template type checking | In this optional phase, the Angular *template compiler* uses the TypeScript compiler to validate the binding expressions in templates. You can enable this phase explicitly by setting the `fullTemplateTypeCheck` configuration option; see [Angular compiler options](angular-compiler-options). | ### Metadata restrictions You write metadata in a *subset* of TypeScript that must conform to the following general constraints: * Limit [expression syntax](aot-compiler#expression-syntax) to the supported subset of JavaScript * Only reference exported symbols after [code folding](aot-compiler#code-folding) * Only call [functions supported](aot-compiler#supported-functions) by the compiler * Decorated and data-bound class members must be public For additional guidelines and instructions on preparing an application for AOT compilation, see [Angular: Writing AOT-friendly applications](https://medium.com/sparkles-blog/angular-writing-aot-friendly-applications-7b64c8afbe3f). > Errors in AOT compilation commonly occur because of metadata that does not conform to the compiler's requirements (as described more fully below). For help in understanding and resolving these problems, see [AOT Metadata Errors](aot-metadata-errors). > > ### Configuring AOT compilation You can provide options in the [TypeScript configuration file](typescript-configuration) that controls the compilation process. See [Angular compiler options](angular-compiler-options) for a complete list of available options. Phase 1: Code analysis ---------------------- The TypeScript compiler does some of the analytic work of the first phase. It emits the `.d.ts` *type definition files* with type information that the AOT compiler needs to generate application code. At the same time, the AOT **collector** analyzes the metadata recorded in the Angular decorators and outputs metadata information in **`.metadata.json`** files, one per `.d.ts` file. You can think of `.metadata.json` as a diagram of the overall structure of a decorator's metadata, represented as an [abstract syntax tree (AST)](https://en.wikipedia.org/wiki/Abstract_syntax_tree). 
> Angular's [schema.ts](https://github.com/angular/angular/blob/main/packages/compiler-cli/src/metadata/schema.ts) describes the JSON format as a collection of TypeScript interfaces. > > ### Expression syntax limitations The AOT collector only understands a subset of JavaScript. Define metadata objects with the following limited syntax: | Syntax | Example | | --- | --- | | Literal object | `{cherry: true, apple: true, mincemeat: false}` | | Literal array | `['cherries', 'flour', 'sugar']` | | Spread in literal array | `['apples', 'flour', ...]` | | Calls | `bake(ingredients)` | | New | `new Oven()` | | Property access | `pie.slice` | | Array index | `ingredients[0]` | | Identity reference | `[Component](../api/core/component)` | | A template string | ``pie is ${multiplier} times better than cake`` | | Literal string | `'pi'` | | Literal number | `3.14153265` | | Literal boolean | `true` | | Literal null | `null` | | Supported prefix operator | `!cake` | | Supported binary operator | `a+b` | | Conditional operator | `a ? b : c` | | Parentheses | `(a+b)` | If an expression uses unsupported syntax, the collector writes an error node to the `.metadata.json` file. The compiler later reports the error if it needs that piece of metadata to generate the application code. > If you want `ngc` to report syntax errors immediately rather than produce a `.metadata.json` file with errors, set the `strictMetadataEmit` option in the TypeScript configuration file. > > > ``` > "angularCompilerOptions": { > … > "strictMetadataEmit" : true > } > ``` > Angular libraries have this option to ensure that all Angular `.metadata.json` files are clean and it is a best practice to do the same when building your own libraries. > > ### No arrow functions The AOT compiler does not support [function expressions](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators/function) and [arrow functions](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Functions/Arrow_functions), also called *lambda* functions. Consider the following component decorator: ``` @Component({ … providers: [{provide: server, useFactory: () => new Server()}] }) ``` The AOT collector does not support the arrow function, `() => new Server()`, in a metadata expression. It generates an error node in place of the function. When the compiler later interprets this node, it reports an error that invites you to turn the arrow function into an *exported function*. You can fix the error by converting to this: ``` export function serverFactory() { return new Server(); } @Component({ … providers: [{provide: server, useFactory: serverFactory}] }) ``` In version 5 and later, the compiler automatically performs this rewriting while emitting the `.js` file. ### Code folding The compiler can only resolve references to ***exported*** symbols. The collector, however, can evaluate an expression during collection and record the result in the `.metadata.json`, rather than the original expression. This allows you to make limited use of non-exported symbols within expressions. For example, the collector can evaluate the expression `1 + 2 + 3 + 4` and replace it with the result, `10`. This process is called *folding*. An expression that can be reduced in this manner is *foldable*. The collector can evaluate references to module-local `const` declarations and initialized `var` and `let` declarations, effectively removing them from the `.metadata.json` file. 
Consider the following component definition: ``` const template = '<div>{{hero.name}}</div>'; @Component({ selector: 'app-hero', template: template }) export class HeroComponent { @Input() hero: Hero; } ``` The compiler could not refer to the `template` constant because it isn't exported. The collector, however, can fold the `template` constant into the metadata definition by in-lining its contents. The effect is the same as if you had written: ``` @Component({ selector: 'app-hero', template: '<div>{{hero.name}}</div>' }) export class HeroComponent { @Input() hero: Hero; } ``` There is no longer a reference to `template` and, therefore, nothing to trouble the compiler when it later interprets the *collector's* output in `.metadata.json`. You can take this example a step further by including the `template` constant in another expression: ``` const template = '<div>{{hero.name}}</div>'; @Component({ selector: 'app-hero', template: template + '<div>{{hero.title}}</div>' }) export class HeroComponent { @Input() hero: Hero; } ``` The collector reduces this expression to its equivalent *folded* string: ``` '<div>{{hero.name}}</div><div>{{hero.title}}</div>' ``` #### Foldable syntax The following table describes which expressions the collector can and cannot fold: | Syntax | Foldable | | --- | --- | | Literal object | yes | | Literal array | yes | | Spread in literal array | no | | Calls | no | | New | no | | Property access | yes, if target is foldable | | Array index | yes, if target and index are foldable | | Identity reference | yes, if it is a reference to a local | | A template with no substitutions | yes | | A template with substitutions | yes, if the substitutions are foldable | | Literal string | yes | | Literal number | yes | | Literal boolean | yes | | Literal null | yes | | Supported prefix operator | yes, if operand is foldable | | Supported binary operator | yes, if both left and right are foldable | | Conditional operator | yes, if condition is foldable | | Parentheses | yes, if the expression is foldable | If an expression is not foldable, the collector writes it to `.metadata.json` as an [AST](https://en.wikipedia.org/wiki/Abstract*syntax*tree) for the compiler to resolve. Phase 2: code generation ------------------------ The collector makes no attempt to understand the metadata that it collects and outputs to `.metadata.json`. It represents the metadata as best it can and records errors when it detects a metadata syntax violation. It's the compiler's job to interpret the `.metadata.json` in the code generation phase. The compiler understands all syntax forms that the collector supports, but it may reject *syntactically* correct metadata if the *semantics* violate compiler rules. ### Public symbols The compiler can only reference *exported symbols*. * Decorated component class members must be public. You cannot make an `@[Input](../api/core/input)()` property private or protected. * Data bound properties must also be public ### Supported classes and functions The collector can represent a function call or object creation with `new` as long as the syntax is valid. The compiler, however, can later refuse to generate a call to a *particular* function or creation of a *particular* object. The compiler can only create instances of certain classes, supports only core decorators, and only supports calls to macros (functions or static methods) that return expressions. 
| Compiler action | Details | | --- | --- | | New instances | The compiler only allows metadata that create instances of the class `[InjectionToken](../api/core/injectiontoken)` from `@angular/core`. | | Supported decorators | The compiler only supports metadata for the [Angular decorators in the `@angular/core` module](../api/core#decorators). | | Function calls | Factory functions must be exported, named functions. The AOT compiler does not support lambda expressions ("arrow functions") for factory functions. | ### Functions and static method calls The collector accepts any function or static method that contains a single `return` statement. The compiler, however, only supports macros in the form of functions or static methods that return an *expression*. For example, consider the following function: ``` export function wrapInArray<T>(value: T): T[] { return [value]; } ``` You can call the `wrapInArray` in a metadata definition because it returns the value of an expression that conforms to the compiler's restrictive JavaScript subset. You might use `wrapInArray()` like this: ``` @NgModule({ declarations: wrapInArray(TypicalComponent) }) export class TypicalModule {} ``` The compiler treats this usage as if you had written: ``` @NgModule({ declarations: [TypicalComponent] }) export class TypicalModule {} ``` The Angular [`RouterModule`](../api/router/routermodule) exports two macro static methods, `forRoot` and `forChild`, to help declare root and child routes. Review the [source code](https://github.com/angular/angular/blob/main/packages/router/src/router_module.ts#L139 "RouterModule.forRoot source code") for these methods to see how macros can simplify configuration of complex [NgModules](ngmodules). ### Metadata rewriting The compiler treats object literals containing the fields `useClass`, `useValue`, `useFactory`, and `data` specially, converting the expression initializing one of these fields into an exported variable that replaces the expression. This process of rewriting these expressions removes all the restrictions on what can be in them because the compiler doesn't need to know the expression's value —it just needs to be able to generate a reference to the value. You might write something like: ``` class TypicalServer { } @NgModule({ providers: [{provide: SERVER, useFactory: () => TypicalServer}] }) export class TypicalModule {} ``` Without rewriting, this would be invalid because lambdas are not supported and `TypicalServer` is not exported. To allow this, the compiler automatically rewrites this to something like: ``` class TypicalServer { } export const θ0 = () => new TypicalServer(); @NgModule({ providers: [{provide: SERVER, useFactory: θ0}] }) export class TypicalModule {} ``` This allows the compiler to generate a reference to `θ0` in the factory without having to know what the value of `θ0` contains. The compiler does the rewriting during the emit of the `.js` file. It does not, however, rewrite the `.d.ts` file, so TypeScript doesn't recognize it as being an export. And it does not interfere with the ES module's exported API. Phase 3: Template type checking ------------------------------- One of the Angular compiler's most helpful features is the ability to type-check expressions within templates, and catch any errors before they cause crashes at runtime. In the template type-checking phase, the Angular template compiler uses the TypeScript compiler to validate the binding expressions in templates. 
Enable this phase explicitly by adding the compiler option `"fullTemplateTypeCheck"` in the `"angularCompilerOptions"` of the project's TypeScript configuration file (see [Angular Compiler Options](angular-compiler-options)). Template validation produces error messages when a type error is detected in a template binding expression, similar to how type errors are reported by the TypeScript compiler against code in a `.ts` file. For example, consider the following component: ``` @Component({ selector: 'my-component', template: '{{person.addresss.street}}' }) class MyComponent { person?: Person; } ``` This produces the following error: ``` my.component.ts.MyComponent.html(1,1): : Property 'addresss' does not exist on type 'Person'. Did you mean 'address'? ``` The file name reported in the error message, `my.component.ts.MyComponent.html`, is a synthetic file generated by the template compiler that holds contents of the `MyComponent` class template. The compiler never writes this file to disk. The line and column numbers are relative to the template string in the `@[Component](../api/core/component)` annotation of the class, `MyComponent` in this case. If a component uses `templateUrl` instead of `template`, the errors are reported in the HTML file referenced by the `templateUrl` instead of a synthetic file. The error location is the beginning of the text node that contains the interpolation expression with the error. If the error is in an attribute binding such as `[value]="person.address.street"`, the error location is the location of the attribute that contains the error. The validation uses the TypeScript type checker and the options supplied to the TypeScript compiler to control how detailed the type validation is. For example, if the `strictTypeChecks` is specified, the error ``` my.component.ts.MyComponent.html(1,1): : Object is possibly 'undefined' ``` is reported as well as the above error message. ### Type narrowing The expression used in an `[ngIf](../api/common/ngif)` directive is used to narrow type unions in the Angular template compiler, the same way the `if` expression does in TypeScript. For example, to avoid `Object is possibly 'undefined'` error in the template above, modify it to only emit the interpolation if the value of `person` is initialized as shown below: ``` @Component({ selector: 'my-component', template: ' {{person.address.street}} ' }) class MyComponent { person?: Person; } ``` Using `*[ngIf](../api/common/ngif)` allows the TypeScript compiler to infer that the `person` used in the binding expression will never be `undefined`. For more information about input type narrowing, see [Improving template type checking for custom directives](structural-directives#directive-type-checks). ### Non-null type assertion operator Use the [non-null type assertion operator](template-expression-operators#non-null-assertion-operator) to suppress the `Object is possibly 'undefined'` error when it is inconvenient to use `*[ngIf](../api/common/ngif)` or when some constraint in the component ensures that the expression is always non-null when the binding expression is interpolated. In the following example, the `person` and `address` properties are always set together, implying that `address` is always non-null if `person` is non-null. There is no convenient way to describe this constraint to TypeScript and the template compiler, but the error is suppressed in the example by using `address!.street`. 
``` @Component({ selector: 'my-component', template: '<span *ngIf="person"> {{person.name}} lives on {{address!.street}} </span>' }) class MyComponent { person?: Person; address?: Address; setData(person: Person, address: Address) { this.person = person; this.address = address; } } ``` The non-null assertion operator should be used sparingly as refactoring of the component might break this constraint. In this example it is recommended to include the checking of `address` in the `*[ngIf](../api/common/ngif)` as shown below: ``` @Component({ selector: 'my-component', template: '<span *ngIf="person && address"> {{person.name}} lives on {{address.street}} </span>' }) class MyComponent { person?: Person; address?: Address; setData(person: Person, address: Address) { this.person = person; this.address = address; } } ``` Last reviewed on Mon Feb 28 2022
angular Transforming data with parameters and chained pipes Transforming data with parameters and chained pipes =================================================== Use optional parameters to fine-tune a pipe's output. For example, use the [`CurrencyPipe`](../api/common/currencypipe "API reference") with a country code such as EUR as a parameter. The template expression `{{ amount | [currency](../api/common/currencypipe):'EUR' }}` transforms the `amount` to currency in euros. Follow the pipe name (`[currency](../api/common/currencypipe)`) with a colon (`:`) and the parameter value (`'EUR'`). If the pipe accepts multiple parameters, separate the values with colons. For example, `{{ amount | [currency](../api/common/currencypipe):'EUR':'Euros '}}` adds the second parameter, the string literal `'Euros '`, to the output string. Use any valid template expression as a parameter, such as a string literal or a component property. Some pipes require at least one parameter and allow more optional parameters, such as [`SlicePipe`](../api/common/slicepipe "API reference for SlicePipe"). For example, `{{ slice:1:5 }}` creates a new array or string containing a subset of the elements starting with element `1` and ending with element `5`. Example: Formatting a date -------------------------- The tabs in the following example demonstrates toggling between two different formats (`'shortDate'` and `'fullDate'`): * The `app.component.html` template uses a format parameter for the [`DatePipe`](../api/common/datepipe) (named `[date](../api/common/datepipe)`) to show the date as **04/15/88**. * The `hero-birthday2.component.ts` component binds the pipe's format parameter to the component's `format` property in the `template` section, and adds a button for a click event bound to the component's `toggleFormat()` method. * The `hero-birthday2.component.ts` component's `toggleFormat()` method toggles the component's `format` property between a short form (`'shortDate'`) and a longer form (`'fullDate'`). ``` <p>The hero's birthday is {{ birthday | date:"MM/dd/yy" }} </p> ``` ``` template: ` <p>The hero's birthday is {{ birthday | date:format }}</p> <button type="button" (click)="toggleFormat()">Toggle Format</button> ` ``` ``` export class HeroBirthday2Component { birthday = new Date(1988, 3, 15); // April 15, 1988 -- since month parameter is zero-based toggle = true; // start with true == shortDate get format() { return this.toggle ? 'shortDate' : 'fullDate'; } toggleFormat() { this.toggle = !this.toggle; } } ``` Clicking the **Toggle Format** button alternates the date format between **04/15/1988** and **Friday, April 15, 1988**. > For `[date](../api/common/datepipe)` pipe format options, see [DatePipe](../api/common/datepipe "DatePipe API Reference page"). > > Example: Applying two formats by chaining pipes ----------------------------------------------- Chain pipes so that the output of one pipe becomes the input to the next. In the following example, chained pipes first apply a format to a date value, then convert the formatted date to uppercase characters. The first tab for the `src/app/app.component.html` template chains `[DatePipe](../api/common/datepipe)` and `[UpperCasePipe](../api/common/uppercasepipe)` to display the birthday as **APR 15, 1988**. The second tab for the `src/app/app.component.html` template passes the `fullDate` parameter to `[date](../api/common/datepipe)` before chaining to `[uppercase](../api/common/uppercasepipe)`, which produces **FRIDAY, APRIL 15, 1988**. 
``` The chained hero's birthday is {{ birthday | date | uppercase}} ``` ``` The chained hero's birthday is {{ birthday | date:'fullDate' | uppercase}} ``` Last reviewed on Fri Apr 01 2022 angular Deprecated APIs and features Deprecated APIs and features ============================ Angular strives to balance innovation and stability. Sometimes, APIs and features become obsolete and need to be removed or replaced so that Angular can stay current with new best practices, changing dependencies, or changes in the (web) platform itself. To make these transitions as easy as possible, APIs and features are deprecated for a period of time before they are removed. This gives you time to update your applications to the latest APIs and best practices. This guide contains a summary of all Angular APIs and features that are currently deprecated. > Features and APIs that were deprecated in v6 or earlier are candidates for removal in version 9 or any later major version. For information about Angular's deprecation and removal practices, see [Angular Release Practices](releases#deprecation-practices "Angular Release Practices: Deprecation practices"). > > For step-by-step instructions on how to update to the latest Angular release, use the interactive update guide at [update.angular.io](https://update.angular.io). > > Index ----- To help you future-proof your projects, the following table lists all deprecated APIs and features, organized by the release in which they are candidates for removal. Each item is linked to the section later in this guide that describes the deprecation reason and replacement options. ### Deprecated features that can be removed in v11 or later | Area | API or Feature | Deprecated in | May be removed in | | --- | --- | --- | --- | | `@angular/common` | [`ReflectiveInjector`](deprecations#reflectiveinjector) | v8 | v11 | | `@angular/core` | [`DefaultIterableDiffer`](deprecations#core) | v7 | v11 | | `@angular/core` | [`ReflectiveKey`](deprecations#core) | v8 | v11 | | `@angular/core` | [`RenderComponentType`](deprecations#core) | v7 | v11 | | `@angular/core` | [`defineInjectable`](deprecations#core) | v8 | v11 | | `@angular/core` | [`entryComponents`](../api/core/ngmodule#entryComponents) | v9 | v11 | | `@angular/core` | [`ANALYZE_FOR_ENTRY_COMPONENTS`](../api/core/analyze_for_entry_components) | v9 | v11 | | `@angular/forms` | [`ngModel` with reactive forms](deprecations#ngmodel-reactive) | v6 | v11 | | `@angular/upgrade` | [`@angular/upgrade`](deprecations#upgrade) | v8 | v11 | | `@angular/upgrade` | [`getAngularLib`](deprecations#upgrade-static) | v8 | v11 | | `@angular/upgrade` | [`setAngularLib`](deprecations#upgrade-static) | v8 | v11 | | polyfills | [reflect-metadata](deprecations#reflect-metadata) | v8 | v11 | | template syntax | [`<template>`](deprecations#template-tag) | v7 | v11 | ### Deprecated features that can be removed in v12 or later | Area | API or Feature | Deprecated in | May be removed in | | --- | --- | --- | --- | | `@angular/core/testing` | [`TestBed.get`](deprecations#testing) | v9 | v12 | | `@angular/core/testing` | [`async`](deprecations#testing) | v9 | v12 | ### Deprecated features that can be removed in v14 or later | Area | API or Feature | Deprecated in | May be removed in | | --- | --- | --- | --- | | `@angular/forms` | [`FormBuilder.group` legacy options parameter](../api/forms/formbuilder#group) | v11 | v14 | ### Deprecated features that can be removed in v15 or later | Area | API or Feature | Deprecated in | May be removed in | | --- | --- | --- | 
--- | | `@angular/common/[http](../api/common/http)` | [`XhrFactory`](../api/common/http/xhrfactory) | v12 | v15 | | `@angular/compiler-cli` | [Input setter coercion](deprecations#input-setter-coercion) | v13 | v15 | | `@angular/compiler-cli` | [`fullTemplateTypeCheck`](deprecations#full-template-type-check) | v13 | v15 | | `@angular/core` | [Factory-based signature of `ApplicationRef.bootstrap`](deprecations#core) | v13 | v15 | | `@angular/core` | [`PlatformRef.bootstrapModuleFactory`](deprecations#core) | v13 | v15 | | `@angular/core` | [Factory-based signature of `ViewContainerRef.createComponent`](../api/core/viewcontainerref#createComponent) | v13 | v15 | | `@angular/platform-server` | [`renderModuleFactory`](deprecations#platform-server) | v13 | v15 | | `@angular/upgrade` | [Factory-based signature of `downgradeModule`](deprecations#upgrade-static) | v13 | v15 | | template syntax | [`bind-`, `on-`, `bindon-`, and `ref-`](deprecations#bind-syntax) | v13 | v15 | ### Deprecated features that can be removed in v16 or later | Area | API or Feature | Deprecated in | May be removed in | | --- | --- | --- | --- | | `@angular/common/[http](../api/common/http)/testing` | [`TestRequest` accepting `ErrorEvent` for error simulation](deprecations#testrequest-errorevent) | v13 | v16 | | `@angular/core` | [`getModuleFactory`](deprecations#core) | v13 | v16 | | `@angular/core` | [`ModuleWithComponentFactories`](deprecations#core) | v13 | v16 | | `@angular/core` | [`Compiler`](deprecations#core) | v13 | v16 | | `@angular/core` | [`CompilerFactory`](deprecations#core) | v13 | v16 | | `@angular/core` | [`NgModuleFactory`](deprecations#core) | v13 | v16 | | `@angular/core` | [`ComponentFactory`](deprecations#core) | v13 | v16 | | `@angular/core` | [`ComponentFactoryResolver`](deprecations#core) | v13 | v16 | | `@angular/core` | [`CompilerOptions.useJit and CompilerOptions.missingTranslation config options`](deprecations#core) | v13 | v16 | | `@angular/platform-browser` | [`BrowserTransferStateModule`](deprecations#platform-browser) | v14 | v16 | | `@angular/platform-browser-dynamic` | [`JitCompilerFactory`](deprecations#platform-browser-dynamic) | v13 | v16 | | `@angular/platform-browser-dynamic` | [`RESOURCE_CACHE_PROVIDER`](deprecations#platform-browser-dynamic) | v13 | v16 | | `@angular/platform-server` | [`ServerTransferStateModule`](deprecations#platform-server) | v14 | v16 | | `@angular/router` | [`relativeLinkResolution`](deprecations#relativeLinkResolution) | v14 | v16 | | `@angular/router` | [`resolver` argument in `RouterOutletContract.activateWith`](deprecations#router) | v14 | v16 | | `@angular/router` | [`resolver` field of the `OutletContext` class](deprecations#router) | v14 | v16 | | `@angular/service-worker` | [`SwUpdate#activated`](../api/service-worker/swupdate#activated) | v13 | v16 | | `@angular/service-worker` | [`SwUpdate#available`](../api/service-worker/swupdate#available) | v13 | v16 | ### Deprecated features that can be removed in v17 or later | Area | API or Feature | Deprecated in | May be removed in | | --- | --- | --- | --- | | `@angular/common` | [`NgComponentOutlet.ngComponentOutletNgModuleFactory`](deprecations#common) | v14 | v17 | | `@angular/common` | [`DatePipe` - `DATE_PIPE_DEFAULT_TIMEZONE`](../api/common/date_pipe_default_timezone) | v15 | v17 | | `@angular/core` | NgModule and `'any'` options for [`providedIn`](deprecations#core) | v15 | v17 | | `@angular/router` | [`RouterLinkWithHref` directive](deprecations#router) | v15 | v17 | | `@angular/router` | [Router 
writeable properties](deprecations#router-writable-properties) | v15.1 | v17 | | `@angular/router` | [Router CanLoad guards](deprecations#router-can-load) | v15.1 | v17 | ### Deprecated features with no planned removal version | Area | API or Feature | Deprecated in | May be removed in | | --- | --- | --- | --- | | template syntax | [`/deep/`, `>>>`, and `::ng-deep`](deprecations#deep-component-style-selector) | v7 | unspecified | For information about Angular Component Development Kit (CDK) and Angular Material deprecations, see the [changelog](https://github.com/angular/components/blob/main/CHANGELOG.md). Deprecated APIs --------------- This section contains a complete list all deprecated APIs, with details to help you plan your migration to a replacement. > **TIP**: In the [API reference section](api) of this site, deprecated APIs are indicated by ~~strikethrough.~~ You can filter the API list by [Status: deprecated](api?status=deprecated). > > ### @angular/common | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`NgComponentOutlet.ngComponentOutletNgModuleFactory`](../api/common/ngcomponentoutlet) | `[NgComponentOutlet.ngComponentOutletNgModule](../api/common/ngcomponentoutlet#ngComponentOutletNgModule)` | v14 | Use the `ngComponentOutletNgModule` input instead. This input doesn't require resolving NgModule factory. | | [`DatePipe` - `DATE_PIPE_DEFAULT_TIMEZONE`](../api/common/date_pipe_default_timezone) | `{ provide: [DATE\_PIPE\_DEFAULT\_OPTIONS](../api/common/date_pipe_default_options), useValue: { timezone: '-1200' }` | v15 | Use the `[DATE\_PIPE\_DEFAULT\_OPTIONS](../api/common/date_pipe_default_options)` injection token, which can configure multiple settings at once instead. | ### @angular/common/http | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`XhrFactory`](../api/common/http/xhrfactory) | `[XhrFactory](../api/common/xhrfactory)` in `@angular/common` | v12 | The `[XhrFactory](../api/common/xhrfactory)` has moved from `@angular/common/[http](../api/common/http)` to `@angular/common`. | ### @angular/core | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`DefaultIterableDiffer`](../api/core/defaultiterablediffer) | n/a | v4 | Not part of public API. | | [`ReflectiveInjector`](../api/core/reflectiveinjector) | [`Injector.create()`](../api/core/injector#create) | v5 | See [`ReflectiveInjector`](deprecations#reflectiveinjector) | | [`ReflectiveKey`](../api/core/reflectivekey) | none | v5 | none | | [`defineInjectable`](../api/core/defineinjectable) | `ɵɵ[defineInjectable](../api/core/defineinjectable)` | v8 | Used only in generated code. No source code should depend on this API. | | [`entryComponents`](../api/core/ngmodule#entryComponents) | none | v9 | See [`entryComponents`](deprecations#entryComponents) | | [`ANALYZE_FOR_ENTRY_COMPONENTS`](../api/core/analyze_for_entry_components) | none | v9 | See [`ANALYZE_FOR_ENTRY_COMPONENTS`](deprecations#entryComponents) | | [`async`](../api/core/testing/async) | [`waitForAsync`](../api/core/testing/waitforasync) | v11 | The [`async`](../api/core/testing/async) function from `@angular/core/testing` has been renamed to `[waitForAsync](../api/core/testing/waitforasync)` in order to avoid confusion with the native JavaScript `[async](../api/common/asyncpipe)` syntax. The existing function is deprecated and can be removed in a future version. 
| | [`getModuleFactory`](../api/core/getmodulefactory) | [`getNgModuleById`](../api/core/getngmodulebyid) | v13 | Ivy allows working with NgModule classes directly, without retrieving corresponding factories. | | `ViewChildren.emitDistinctChangesOnly` / `ContentChildren.emitDistinctChangesOnly` | none (was part of [issue #40091](https://github.com/angular/angular/issues/40091)) | | This is a temporary flag introduced as part of bug fix of [issue #40091](https://github.com/angular/angular/issues/40091) and will be removed. | | Factory-based signature of [`ApplicationRef.bootstrap`](../api/core/applicationref#bootstrap) | Type-based signature of [`ApplicationRef.bootstrap`](../api/core/applicationref#bootstrap) | v13 | With Ivy, there is no need to resolve Component factory and Component Type can be provided directly. | | [`PlatformRef.bootstrapModuleFactory`](../api/core/platformref#bootstrapModuleFactory) | [`PlatformRef.bootstrapModule`](../api/core/platformref#bootstrapModule) | v13 | With Ivy, there is no need to resolve NgModule factory and NgModule Type can be provided directly. | | [`ModuleWithComponentFactories`](../api/core/modulewithcomponentfactories) | none | v13 | Ivy JIT mode doesn't require accessing this symbol. See [JIT API changes due to ViewEngine deprecation](deprecations#jit-api-changes) for additional context. | | [`Compiler`](../api/core/compiler) | none | v13 | Ivy JIT mode doesn't require accessing this symbol. See [JIT API changes due to ViewEngine deprecation](deprecations#jit-api-changes) for additional context. | | [`CompilerFactory`](../api/core/compilerfactory) | none | v13 | Ivy JIT mode doesn't require accessing this symbol. See [JIT API changes due to ViewEngine deprecation](deprecations#jit-api-changes) for additional context. | | [`NgModuleFactory`](../api/core/ngmodulefactory) | Use non-factory based framework APIs like [PlatformRef.bootstrapModule](../api/core/platformref#bootstrapModule) and [createNgModule](../api/core/createngmodule) | v13 | Ivy JIT mode doesn't require accessing this symbol. See [JIT API changes due to ViewEngine deprecation](deprecations#jit-api-changes) for additional context. | | [Factory-based signature of `ViewContainerRef.createComponent`](../api/core/viewcontainerref#createComponent) | [Type-based signature of `ViewContainerRef.createComponent`](../api/core/viewcontainerref#createComponent) | v13 | Angular no longer requires component factories to dynamically create components. Use different signature of the `[createComponent](../api/core/createcomponent)` method, which allows passing Component class directly. | | [`ComponentFactory`](../api/core/componentfactory) | Use non-factory based framework APIs. | v13 | Since Ivy, Component factories are not required. Angular provides other APIs where Component classes can be used directly. | | [`ComponentFactoryResolver`](../api/core/componentfactoryresolver) | Use non-factory based framework APIs. | v13 | Since Ivy, Component factories are not required, thus there is no need to resolve them. | | [`CompilerOptions.useJit and CompilerOptions.missingTranslation config options`](../api/core/compileroptions) | none | v13 | Since Ivy, those config options are unused, passing them has no effect. 
| | [`providedIn`](../api/core/injectable#providedIn) with NgModule | Prefer `'root'` providers, or use NgModule `providers` if scoping to an NgModule is necessary | v15 | none | | [`providedIn: 'any'`](../api/core/injectable#providedIn) | none | v15 | This option has confusing semantics and nearly zero usage. | ### @angular/core/testing | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`TestBed.get`](../api/core/testing/testbed#get) | [`TestBed.inject`](../api/core/testing/testbed#inject) | v9 | Same behavior, but type safe. | | [`async`](../api/core/testing/async) | [`waitForAsync`](../api/core/testing/waitforasync) | v10 | Same behavior, but rename to avoid confusion. | ### @angular/router | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`resolver` argument in `RouterOutletContract.activateWith`](../api/router/routeroutletcontract#activatewith) | No replacement needed | v14 | Component factories are not required to create an instance of a component dynamically. Passing a factory resolver via `resolver` argument is no longer needed. | | [`resolver` field of the `OutletContext` class](../api/router/outletcontext#resolver) | No replacement needed | v14 | Component factories are not required to create an instance of a component dynamically. Passing a factory resolver via `resolver` class field is no longer needed. | | [`RouterLinkWithHref` directive](../api/router/routerlinkwithhref) | Use `[RouterLink](../api/router/routerlink)` instead. | v15 | The `[RouterLinkWithHref](../api/router/routerlinkwithhref)` directive code was merged into `[RouterLink](../api/router/routerlink)`. Now the `[RouterLink](../api/router/routerlink)` directive can be used for all elements that have `[routerLink](../api/router/routerlink)` attribute. | | [`provideRoutes` function](../api/router/provideroutes) | Use `[ROUTES](../api/router/routes)` `[InjectionToken](../api/core/injectiontoken)` instead. | v15 | The `[provideRoutes](../api/router/provideroutes)` helper function is minimally useful and can be unintentionally used instead of `[provideRouter](../api/router/providerouter)` due to similar spelling. | | [`setupTestingRouter` function](../api/router/testing/setuptestingrouter) | Use `[provideRouter](../api/router/providerouter)` or `[RouterTestingModule](../api/router/testing/routertestingmodule)` instead. | v15.1 | The `[setupTestingRouter](../api/router/testing/setuptestingrouter)` function is not necessary. The `[Router](../api/router/router)` is initialized based on the DI configuration in tests as it would be in production. | ### @angular/platform-browser | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`BrowserTransferStateModule`](../api/platform-browser/browsertransferstatemodule) | No replacement needed. | v14.1 | The `[TransferState](../api/platform-browser/transferstate)` class is available for injection without importing additional modules on the client side of a server-rendered application. | ### @angular/platform-browser-dynamic | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`JitCompilerFactory`](../api/platform-browser-dynamic/jitcompilerfactory) | none | v13 | This symbol is no longer necessary. See [JIT API changes due to ViewEngine deprecation](deprecations#jit-api-changes) for additional context. 
| | [`RESOURCE_CACHE_PROVIDER`](../api/platform-browser-dynamic/resource_cache_provider) | none | v13 | This was previously necessary in some cases to test AOT-compiled components with View Engine, but is no longer since Ivy. | ### @angular/platform-server | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`renderModuleFactory`](../api/platform-server/rendermodulefactory) | [`renderModule`](../api/platform-server/rendermodule) | v13 | This symbol is no longer necessary. See [JIT API changes due to ViewEngine deprecation](deprecations#jit-api-changes) for additional context. | | [`ServerTransferStateModule`](../api/platform-server/servertransferstatemodule) | No replacement needed. | v14.1 | The `[TransferState](../api/platform-browser/transferstate)` class is available for injection without importing additional modules during server side rendering, when `[ServerModule](../api/platform-server/servermodule)` is imported or `[renderApplication](../api/platform-server/renderapplication)` function is used for bootstrap. | ### @angular/forms | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`ngModel` with reactive forms](deprecations#ngmodel-reactive) | [`FormControlDirective`](../api/forms/formcontroldirective) | v6 | none | | [`FormBuilder.group` legacy options parameter](../api/forms/formbuilder#group) | [`AbstractControlOptions` parameter value](../api/forms/abstractcontroloptions) | v11 | none | ### @angular/service-worker | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`SwUpdate#activated`](../api/service-worker/swupdate#activated) | [`SwUpdate#activateUpdate()` return value](../api/service-worker/swupdate#activateUpdate) | v13 | The return value of `[SwUpdate](../api/service-worker/swupdate)#activateUpdate()` indicates whether an update was successfully activated. | | [`SwUpdate#available`](../api/service-worker/swupdate#available) | [`SwUpdate#versionUpdates`](../api/service-worker/swupdate#versionUpdates) | v13 | The behavior of `[SwUpdate](../api/service-worker/swupdate)#available` can be rebuilt by filtering for `[VersionReadyEvent](../api/service-worker/versionreadyevent)` events on [`SwUpdate#versionUpdates`](../api/service-worker/swupdate#versionUpdates) | ### @angular/upgrade | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [All entry points](../api/upgrade) | [`@angular/upgrade/static`](../api/upgrade/static) | v5 | See [Upgrading from AngularJS](upgrade). | ### @angular/upgrade/static | API | Replacement | Deprecation announced | Details | | --- | --- | --- | --- | | [`getAngularLib`](../api/upgrade/static/getangularlib) | [`getAngularJSGlobal`](../api/upgrade/static/getangularjsglobal) | v5 | See [Upgrading from AngularJS](upgrade). | | [`setAngularLib`](../api/upgrade/static/setangularlib) | [`setAngularJSGlobal`](../api/upgrade/static/setangularjsglobal) | v5 | See [Upgrading from AngularJS](upgrade). | | [Factory-based signature of `downgradeModule`](../api/upgrade/static/downgrademodule) | [NgModule-based signature of `downgradeModule`](../api/upgrade/static/downgrademodule) | v13 | The `[downgradeModule](../api/upgrade/static/downgrademodule)` supports more ergonomic NgModule-based API (versus NgModule factory based API). 
| Deprecated features ------------------- This section lists all deprecated features, which includes template syntax, configuration options, and any other deprecations not listed in the [Deprecated APIs](deprecations#deprecated-apis) section. It also includes deprecated API usage scenarios or API combinations, to augment the information above. ### Bazel builder and schematics Bazel builder and schematics were introduced in Angular Labs to let users try out Bazel without having to manage Bazel version and BUILD files. This feature has been deprecated. For more information, please refer to the [migration doc](https://github.com/angular/angular/blob/main/packages/bazel/docs/BAZEL_SCHEMATICS.md). ### Web Tracing Framework integration Angular previously supported an integration with the [Web Tracing Framework (WTF)](https://google.github.io/tracing-framework) for performance testing of Angular applications. This integration has not been maintained and is now defunct. As a result, the integration was deprecated in Angular version 8, and due to no evidence of any existing usage, removed in version 9. ### `/deep/`, `>>>`, and `::ng-deep` component style selectors The shadow-dom-piercing descendant combinator is deprecated and support is being [removed from major browsers and tools](https://developers.google.com/web/updates/2017/10/remove-shadow-piercing). As such, in v4, Angular's support for `/deep/`, `>>>`, and `::ng-deep` was deprecated. Until removal, `::ng-deep` is preferred for broader compatibility with the tools. For more information, see [/deep/, >>>, and ::ng-deep](component-styles#deprecated-deep--and-ng-deep "Component Styles guide, Deprecated deep and ngdeep") in the Component Styles guide. ### `bind-`, `on-`, `bindon-`, and `ref-` prefixes The template prefixes `bind-`, `on-`, `bindon-`, and `ref-` have been deprecated in v13. Templates should use the more widely documented syntaxes for binding and references: * `[input]="value"` instead of `bind-input="value"` * `[@[trigger](../api/animations/trigger)]="value"` instead of `bind-animate-trigger="value"` * `(click)="onClick()"` instead of `on-click="onClick()"` * `[([ngModel](../api/forms/ngmodel))]="value"` instead of `bindon-ngModel="value"` * `#templateRef` instead of `ref-templateRef` ### `<template>` tag The `<template>` tag was deprecated in v4 to avoid colliding with a DOM element of the same name (such as when using web components). Use `[<ng-template>](../api/core/ng-template)` instead. For more information, see the [Ahead-of-Time Compilation](aot-compiler) guide. ### `[ngModel](../api/forms/ngmodel)` with reactive forms Support for using the `[ngModel](../api/forms/ngmodel)` input property and `ngModelChange` event with reactive form directives has been deprecated in Angular v6 and can be removed in a future version of Angular. Now deprecated: ``` <input [formControl]="control" [(ngModel)]="value"> ``` ``` this.value = 'some value'; ``` This support was deprecated for several reasons. First, developers found this pattern confusing. It seems like the actual `[ngModel](../api/forms/ngmodel)` directive is being used, but in fact it's an input/output property named `[ngModel](../api/forms/ngmodel)` on the reactive form directive that approximates some, but not all, of the directive's behavior. It allows getting and setting a value and intercepting value events, but some `[ngModel](../api/forms/ngmodel)` features, such as delaying updates with`ngModelOptions` or exporting the directive, don't work. 
In addition, this pattern mixes template-driven and reactive forms strategies, which prevents taking advantage of the full benefits of either strategy. Setting the value in the template violates the template-agnostic principles behind reactive forms, whereas adding a `[FormControl](../api/forms/formcontrol)`/`[FormGroup](../api/forms/formgroup)` layer in the class removes the convenience of defining forms in the template. To update your code before support is removed, decide whether to stick with reactive form directives (and get/set values using reactive forms patterns) or switch to template-driven directives. **After** (choice 1 - use reactive forms): ``` <input [formControl]="control"> ``` ``` this.control.setValue('some value'); ``` **After** (choice 2 - use template-driven forms): ``` <input [(ngModel)]="value"> ``` ``` this.value = 'some value'; ``` By default, when you use this pattern, you get a deprecation warning once in dev mode. You can choose to silence this warning by configuring `[ReactiveFormsModule](../api/forms/reactiveformsmodule)` at import time: ``` imports: [ ReactiveFormsModule.withConfig({warnOnNgModelWithFormControl: 'never'}) ], ``` Alternatively, you can choose to surface a separate warning for each instance of this pattern with a configuration value of `"always"`. This may help to track down where in the code the pattern is being used as the code is being updated. ### `[ReflectiveInjector](../api/core/reflectiveinjector)` In version 5, Angular replaced the `[ReflectiveInjector](../api/core/reflectiveinjector)` with the `StaticInjector`. The injector no longer requires the Reflect polyfill, reducing application size for most developers. **Before**: ``` ReflectiveInjector.resolveAndCreate(providers); ``` **After**: ``` Injector.create({providers}); ``` ### Public `[Router](../api/router/router)` properties None of the public properties of the `[Router](../api/router/router)` are meant to be writeable. They should all be configured using other methods, all of which have been documented. The following strategies are meant to be configured by registering the application strategy in DI via the `providers` in the root `[NgModule](../api/core/ngmodule)` or `[bootstrapApplication](../api/platform-browser/bootstrapapplication)`: * `routeReuseStrategy` * `titleStrategy` * `urlHandlingStrategy` The following options are meant to be configured using the options available in `RouterModule.forRoot` or `[provideRouter](../api/router/providerouter)` and `[withRouterConfig](../api/router/withrouterconfig)`. * `onSameUrlNavigation` * `paramsInheritanceStrategy` * `urlUpdateStrategy` * `canceledNavigationResolution` The following options are available in `RouterModule.forRoot` but not available in `[provideRouter](../api/router/providerouter)`: * `malformedUriErrorHandler` - This was not found to be used by anyone. There are currently no plans to make this available in `[provideRouter](../api/router/providerouter)`. * `errorHandler` - Developers should instead subscribe to `[Router.events](../api/router/router#events)` and filter for `[NavigationError](../api/router/navigationerror)`. ### `[CanLoad](../api/router/canload)` guards `[CanLoad](../api/router/canload)` guards in the Router are deprecated in favor of `[CanMatch](../api/router/canmatch)`. These guards execute at the same time in the lifecycle of a navigation. 
A `[CanMatch](../api/router/canmatch)` guard which returns false will prevent the `[Route](../api/router/route)` from being matched at all and also prevent loading the children of the `[Route](../api/router/route)`. `[CanMatch](../api/router/canmatch)` guards can accomplish the same goals as `[CanLoad](../api/router/canload)` but with the addition of allowing the navigation to match other routes when they reject (such as a wildcard route). There is no need to have both types of guards in the API surface. The `relativeLinkResolution` option is deprecated and being removed. In version 11, the default behavior was changed to the correct one. After `relativeLinkResolution` is removed, the correct behavior is always used without an option to use the broken behavior. A dev mode warning was added in v14 to warn if a created `[UrlTree](../api/router/urltree)` relies on the `relativeLinkResolution: 'legacy'` option. ### `loadChildren` string syntax When Angular first introduced lazy routes, there wasn't browser support for dynamically loading additional JavaScript. Angular created its own scheme using the syntax `loadChildren: './lazy/lazy.module#LazyModule'` and built tooling to support it. Now that ECMAScript dynamic import is supported in many browsers, Angular is moving toward this new syntax. In version 8, the string syntax for the [`loadChildren`](../api/router/loadchildren) route specification was deprecated, in favor of new syntax that uses `import()` syntax. **Before**: ``` const routes: Routes = [{ path: 'lazy', // The following string syntax for loadChildren is deprecated loadChildren: './lazy/lazy.module#LazyModule', }]; ``` **After**: ``` const routes: Routes = [{ path: 'lazy', // The new import() syntax loadChildren: () => import('./lazy/lazy.module').then(m => m.LazyModule) }]; ``` > **Version 8 update**: When you update to version 8, the [`ng update`](cli/update) command performs the transformation automatically. Prior to version 7, the `import()` syntax only works in JIT mode (with view engine). > > > **Declaration syntax**: It's important to follow the route declaration syntax `loadChildren: () => import('...').then(m => m.ModuleName)` to allow `ngc` to discover the lazy-loaded module and the associated `[NgModule](../api/core/ngmodule)`. You can find the complete list of allowed syntax constructs [here](https://github.com/angular/angular-cli/blob/a491b09800b493fe01301387fa9a025f7c7d4808/packages/ngtools/webpack/src/transformers/import_factory.ts#L104-L113). These restrictions will be relaxed with the release of Ivy since it'll no longer use `NgFactories`. > > ### Dependency on a reflect-metadata polyfill in JIT mode Angular applications, and specifically applications that relied on the JIT compiler, used to require a polyfill for the [reflect-metadata](https://github.com/rbuckton/reflect-metadata) APIs. The need for this polyfill was removed in Angular version 8.0 ([see #14473](https://github.com/angular/angular-cli/pull/14473)), rendering the presence of the polyfill in most Angular applications unnecessary. Because the polyfill can be depended on by third-party libraries, instead of removing it from all Angular projects, we are deprecating the requirement for this polyfill as of version 8.0. This should give library authors and application developers sufficient time to evaluate if they need the polyfill, and perform any refactoring necessary to remove the dependency on it. 
In a typical Angular project, the polyfill is not used in production builds, so removing it should not impact production applications. The goal behind this removal is overall simplification of the build setup and decrease in the number of external dependencies. ### `@[ViewChild](../api/core/viewchild)()` / `@[ContentChild](../api/core/contentchild)()` static resolution as the default See the [dedicated migration guide for static queries](static-query-migration). ### `@[ContentChild](../api/core/contentchild)()` / `@[Input](../api/core/input)()` used together The following pattern is deprecated: ``` @Input() @ContentChild(TemplateRef) tpldeprecated !: TemplateRef<any>; ``` Rather than using this pattern, separate the two decorators into their own properties and add fallback logic as in the following example: ``` @Input() tpl !: TemplateRef<any>; @ContentChild(TemplateRef) inlineTemplate !: TemplateRef<any>; ``` ### Cannot assign to template variables In the following example, the two-way binding means that `optionName` should be written when the `valueChange` event fires. ``` <option *ngFor="let optionName of options" [(value)]="optionName"></option> ``` However, in practice, Angular ignores two-way bindings to template variables. Starting in version 8, attempting to write to template variables is deprecated. In a future version, we will throw to indicate that the write is not supported. ``` <option *ngFor="let optionName of options" [value]="optionName"></option> ``` ### Binding to `innerText` in `platform-server` [Domino](https://github.com/fgnass/domino), which is used in server-side rendering, doesn't support `innerText`, so in platform-server's *domino adapter*, there was special code to fall back to `textContent` if you tried to bind to `innerText`. These two properties have subtle differences, so switching to `textContent` under the hood can be surprising to users. For this reason, we are deprecating this behavior. Going forward, users should explicitly bind to `textContent` when using Domino. ### `wtfStartTimeRange` and all `wtf*` APIs All of the `wtf*` APIs are deprecated and will be removed in a future version. ### `entryComponents` and `[ANALYZE\_FOR\_ENTRY\_COMPONENTS](../api/core/analyze_for_entry_components)` no longer required Previously, the `entryComponents` array in the `[NgModule](../api/core/ngmodule)` definition was used to tell the compiler which components would be created and inserted dynamically. With Ivy, this isn't a requirement anymore and the `entryComponents` array can be removed from existing module declarations. The same applies to the `[ANALYZE\_FOR\_ENTRY\_COMPONENTS](../api/core/analyze_for_entry_components)` injection token. > **NOTE**: You may still need to keep these if building a library that will be consumed by a View Engine application. > > ### `[ModuleWithProviders](../api/core/modulewithproviders)` type without a generic Some Angular libraries, such as `@angular/router` and `@ngrx/store`, implement APIs that return a type called `[ModuleWithProviders](../api/core/modulewithproviders)` (typically using a method named `forRoot()`). This type represents an `[NgModule](../api/core/ngmodule)` along with additional providers. Angular version 9 deprecates use of `[ModuleWithProviders](../api/core/modulewithproviders)` without an explicitly generic type, where the generic type refers to the type of the `[NgModule](../api/core/ngmodule)`. In a future version of Angular, the generic will no longer be optional. 
If you're using the CLI, `ng update` should [migrate your code automatically](migration-module-with-providers). If you're not using the CLI, you can add any missing generic types to your application manually. For example: **Before**: ``` @NgModule({ /* ... */ }) export class MyModule { static forRoot(config: SomeConfig): ModuleWithProviders { return { ngModule: SomeModule, providers: [ {provide: SomeConfig, useValue: config} ] }; } } ``` **After**: ``` @NgModule({ /* ... */ }) export class MyModule { static forRoot(config: SomeConfig): ModuleWithProviders<SomeModule> { return { ngModule: SomeModule, providers: [ {provide: SomeConfig, useValue: config} ] }; } } ``` ### Input setter coercion Since the `strictTemplates` flag has been introduced in Angular, the compiler has been able to type-check input bindings to the declared input type of the corresponding directive. When a getter/setter pair is used for the input, the setter might need to accept more types than the getter returns, such as when the setter first converts the input value. However, until TypeScript 4.3 a getter/setter pair was required to have identical types so this pattern could not be accurately declared. To mitigate this limitation, it was made possible to declare [input setter coercion fields](template-typecheck#input-setter-coercion) in directives that are used when type-checking input bindings. However, since [TypeScript 4.3](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-3.html#separate-write-types-on-properties) the limitation has been removed; setters can now accept a wider type than what is returned by the getter. This means that input coercion fields are no longer needed, as their effects can be achieved by widening the type of the setter. For example, the following directive: ``` @Component({ }) export class SubmitButtonComponent { static ngAcceptInputType_disabled: boolean|''; private disabledValue = false; @Input() get disabled(): boolean { return this.disabledValue; } set disabled(value: boolean) { this.disabledValue = (value === '') || value; } } ``` can be refactored as follows: ``` @Component({ }) export class SubmitButtonComponent { private disabledValue = false; @Input() get disabled(): boolean { return this.disabledValue; } set disabled(value: boolean|'') { this.disabledValue = (value === '') || value; } } ``` ### `fullTemplateTypeCheck` When compiling your application using the AOT compiler, your templates are type-checked according to a certain strictness level. Before Angular 9 there existed only two strictness levels of template type checking as determined by [the `fullTemplateTypeCheck` compiler option](angular-compiler-options). In version 9 the `strictTemplates` family of compiler options has been introduced as a more fine-grained approach to configuring how strict your templates are being type-checked. The `fullTemplateTypeCheck` flag is being deprecated in favor of the new `strictTemplates` option and its related compiler options. 
Projects that currently have `fullTemplateTypeCheck: true` configured can migrate to the following set of compiler options to achieve the same level of type-checking: ``` { "angularCompilerOptions": { … "strictTemplates": true, "strictInputTypes": false, "strictNullInputTypes": false, "strictAttributeTypes": false, "strictOutputEventTypes": false, "strictDomEventTypes": false, "strictDomLocalRefTypes": false, "strictSafeNavigationTypes": false, "strictContextGenerics": false, … } } ``` JIT API changes due to ViewEngine deprecation --------------------------------------------- In ViewEngine, [JIT compilation](glossary#jit) required special providers (such as `[Compiler](../api/core/compiler)` or `[CompilerFactory](../api/core/compilerfactory)`) to be injected in the app and corresponding methods to be invoked. With Ivy, JIT compilation takes place implicitly if the Component, NgModule, etc. have not already been [AOT compiled](glossary#aot). Those special providers were made available in Ivy for backwards-compatibility with ViewEngine to make the transition to Ivy smoother. Since ViewEngine is deprecated and will soon be removed, those symbols are now deprecated as well. > **IMPORTANT**: this deprecation doesn't affect JIT mode in Ivy (JIT remains available with Ivy, however we are exploring a possibility of deprecating it in the future. See [RFC: Exploration of use-cases for Angular JIT compilation mode](https://github.com/angular/angular/issues/43133)). > > ### `[TestRequest](../api/common/http/testing/testrequest)` accepting `ErrorEvent` Angular provides utilities for testing `[HttpClient](../api/common/http/httpclient)`. The `[TestRequest](../api/common/http/testing/testrequest)` class from `@angular/common/[http](../api/common/http)/testing` mocks HTTP request objects for use with `[HttpTestingController](../api/common/http/testing/httptestingcontroller)`. `[TestRequest](../api/common/http/testing/testrequest)` provides an API for simulating an HTTP response with an error. In earlier versions of Angular, this API accepted objects of type `ErrorEvent`, which does not match the type of error event that browsers return natively. If you use `ErrorEvent` with `[TestRequest](../api/common/http/testing/testrequest)`, you should switch to `ProgressEvent`. Here is an example using a `ProgressEvent`: ``` const mockError = new ProgressEvent('error'); const mockRequest = httpTestingController.expectOne(..); mockRequest.error(mockError); ``` Deprecated CLI APIs and Options ------------------------------- This section contains a complete list all of the currently deprecated CLI flags. ### @angular-devkit/build-angular | API/Option | May be removed in | Details | | --- | --- | --- | | `deployUrl` | v15 | Use `baseHref` option, `[APP\_BASE\_HREF](../api/common/app_base_href)` DI token or a combination of both instead. For more information, see [the deploy url](deployment#the-deploy-url). | | Protractor builder | v14 | Deprecate as part of the Protractor deprecation. | Removed APIs ------------ The following APIs have been removed starting with version 11.0.0\*: | Package | API | Replacement | Details | | --- | --- | --- | --- | | `@angular/router` | `preserveQueryParams` | [`queryParamsHandling`](../api/router/urlcreationoptions#queryParamsHandling) | | \* To see APIs removed in version 10, check out this guide on the [version 10 docs site](https://v10.angular.io/guide/deprecations#removed). 
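As an illustration of the removal listed above, navigation code that previously relied on `preserveQueryParams` can generally be rewritten with `queryParamsHandling`. The following sketch is illustrative rather than taken from this guide; the injected `router` and the target path are placeholders.

```
// Before (removed API):
// this.router.navigate(['/results'], {preserveQueryParams: true});

// After: keep the current query parameters explicitly.
this.router.navigate(['/results'], {queryParamsHandling: 'preserve'});
```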
### Style Sanitization for `[[style](../api/animations/style)]` and `[style.prop]` bindings Angular used to sanitize `[[style](../api/animations/style)]` and `[style.prop]` bindings to prevent malicious code from being inserted through `javascript:` expressions in CSS `url()` entries. However, most modern browsers no longer support the usage of these expressions, so sanitization was only maintained for the sake of IE 6 and 7. Given that Angular does not support either IE 6 or 7 and sanitization has a performance cost, we will no longer sanitize style bindings as of version 10 of Angular. ### `loadChildren` string syntax in `@angular/router` It is no longer possible to use the `loadChildren` string syntax to configure lazy routes. The string syntax has been replaced with dynamic import statements. The `DeprecatedLoadChildren` type was removed from `@angular/router`. Find more information about the replacement in the [`LoadChildrenCallback` documentation](../api/router/loadchildrencallback). The supporting classes `NgModuleFactoryLoader`, `SystemJsNgModuleLoader`, and `SystemJsNgModuleLoaderConfig` were removed from `@angular/core`, as well as `SpyNgModuleFactoryLoader` from `@angular/router`. ### `WrappedValue` The purpose of `WrappedValue` was to allow the same object instance to be treated as different for the purposes of change detection. It was commonly used with the `[async](../api/common/asyncpipe)` pipe in the case where the `Observable` produces the same instance of the value. Given that this use case is relatively rare and special handling impacted application performance, the `WrappedValue` API has been removed in Angular 13. If you rely on the behavior that the same object instance should cause change detection, you have two options: * Clone the resulting value so that it has a new identity * Explicitly call [`ChangeDetectorRef.detectChanges()`](../api/core/changedetectorref#detectchanges) to force the update Last reviewed on Mon Feb 28 2022
angular Example Angular Internationalization application Example Angular Internationalization application ================================================ Explore the translated example application ------------------------------------------ > To explore the sample application with French translations used in the [Angular Internationalization](i18n-overview "Angular Internationalization | Angular") guide, see . > > `fr-CA` and `en-US` example ---------------------------- The following tabs display the example application and the associated translation files. ``` <h1 i18n="User welcome|An introduction header for this sample@@introductionHeader"> Hello i18n! </h1> <ng-container i18n>I don't output any element</ng-container> <br /> <img [src]="logo" i18n-title title="Angular logo" alt="Angular logo"/> <br> <button type="button" (click)="inc(1)">+</button> <button type="button" (click)="inc(-1)">-</button> <span i18n>Updated {minutes, plural, =0 {just now} =1 {one minute ago} other {{{minutes}} minutes ago}}</span> ({{minutes}}) <br><br> <button type="button" (click)="male()">&#9794;</button> <button type="button" (click)="female()">&#9792;</button> <button type="button" (click)="other()">&#9895;</button> <span i18n>The author is {gender, select, male {male} female {female} other {other}}</span> <br><br> <span i18n>Updated: {minutes, plural, =0 {just now} =1 {one minute ago} other {{{minutes}} minutes ago by {gender, select, male {male} female {female} other {other}}}} </span> ``` ``` import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html' }) export class AppComponent { minutes = 0; gender = 'female'; fly = true; logo = 'https://angular.io/assets/images/logos/angular/angular.png'; inc(i: number) { this.minutes = Math.min(5, Math.max(0, this.minutes + i)); } male() { this.gender = 'male'; } female() { this.gender = 'female'; } other() { this.gender = 'other'; } } ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; @NgModule({ imports: [ BrowserModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` import { enableProdMode } from '@angular/core'; import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; import { environment } from './environments/environment'; if (environment.production) { enableProdMode(); } platformBrowserDynamic().bootstrapModule(AppModule); ``` ``` <?xml version="1.0" encoding="UTF-8" ?> <xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2"> <file source-language="en" datatype="plaintext" original="ng2.template"> <body> <trans-unit id="introductionHeader" datatype="html"> <source>Hello i18n!</source> <note priority="1" from="description">An introduction header for this sample</note> <note priority="1" from="meaning">User welcome</note> </trans-unit> <trans-unit id="introductionHeader" datatype="html"> <source>Hello i18n!</source> <target>Bonjour i18n !</target> <note priority="1" from="description">An introduction header for this sample</note> <note priority="1" from="meaning">User welcome</note> </trans-unit> <trans-unit id="ba0cc104d3d69bf669f97b8d96a4c5d8d9559aa3" datatype="html"> <source>I don&apos;t output any element</source> <target>Je n'affiche aucun élément</target> </trans-unit> <trans-unit id="701174153757adf13e7c24a248c8a873ac9f5193" datatype="html"> <source>Angular 
logo</source> <target>Logo d'Angular</target> </trans-unit> <trans-unit id="5a134dee893586d02bffc9611056b9cadf9abfad" datatype="html"> <source>{VAR_PLURAL, plural, =0 {just now} =1 {one minute ago} other {<x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes ago} }</source> <target>{VAR_PLURAL, plural, =0 {à l'instant} =1 {il y a une minute} other {il y a <x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes} }</target> </trans-unit> <trans-unit id="f99f34ac9bd4606345071bd813858dec29f3b7d1" datatype="html"> <source>The author is <x id="ICU" equiv-text="{gender, select, male {...} female {...} other {...}}"/></source> <target>L'auteur est <x id="ICU" equiv-text="{gender, select, male {...} female {...} other {...}}"/></target> </trans-unit> <trans-unit id="eff74b75ab7364b6fa888f1cbfae901aaaf02295" datatype="html"> <source>{VAR_SELECT, select, male {male} female {female} other {other} }</source> <target>{VAR_SELECT, select, male {un homme} female {une femme} other {autre} }</target> </trans-unit> <trans-unit id="972cb0cf3e442f7b1c00d7dab168ac08d6bdf20c" datatype="html"> <source>Updated: <x id="ICU" equiv-text="{minutes, plural, =0 {...} =1 {...} other {...}}"/></source> <target>Mis à jour: <x id="ICU" equiv-text="{minutes, plural, =0 {...} =1 {...} other {...}}"/></target> </trans-unit> <trans-unit id="7151c2e67748b726f0864fc443861d45df21d706" datatype="html"> <source>{VAR_PLURAL, plural, =0 {just now} =1 {one minute ago} other {<x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes ago by {VAR_SELECT, select, male {male} female {female} other {other} }} }</source> <target>{VAR_PLURAL, plural, =0 {à l'instant} =1 {il y a une minute} other {il y a <x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes par {VAR_SELECT, select, male {un homme} female {une femme} other {autre} }} }</target> </trans-unit> <trans-unit id="myId" datatype="html"> <source>Hello</source> <target state="new">Bonjour</target> </trans-unit> </body> </file> </xliff> ``` Last reviewed on Mon Feb 28 2022 angular Typed Forms Typed Forms =========== As of Angular 14, reactive forms are strictly typed by default. Prerequisites ------------- As background for this guide, you should already be familiar with [Angular Reactive Forms](reactive-forms "Reactive Forms"). Overview of Typed Forms ----------------------- With Angular reactive forms, you explicitly specify a *form model*. As a simple example, consider this basic user login form: ``` const login = new FormGroup({ email: new FormControl(''), password: new FormControl(''), }); ``` Angular provides many APIs for interacting with this `[FormGroup](../api/forms/formgroup)`. For example, you may call `login.value`, `login.controls`, `login.patchValue`, etc. (For a full API reference, see the [API documentation](../api/forms/formgroup).) In previous Angular versions, most of these APIs included `any` somewhere in their types, and interacting with the structure of the controls, or the values themselves, was not type-safe. For example: you could write the following invalid code: ``` const emailDomain = login.value.email.domain; ``` With strictly typed reactive forms, the above code does not compile, because there is no `domain` property on `[email](../api/forms/emailvalidator)`. In addition to the added safety, the types enable a variety of other improvements, such as better autocomplete in IDEs, and an explicit way to specify form structure. These improvements currently apply only to *reactive* forms (not [*template-driven* forms](forms "Forms Guide")). 
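To illustrate, the following sketch spells out the types that the compiler now infers for the `login` group above. The comments reflect Angular 14's typed forms and are annotations added here for illustration, not code from this guide.

```
const login = new FormGroup({
  email: new FormControl(''),    // inferred as FormControl<string | null>
  password: new FormControl(''), // inferred as FormControl<string | null>
});

// login.value is typed as Partial<{email: string | null, password: string | null}>
const email = login.value.email;            // string | null | undefined
// const domain = login.value.email.domain; // compile-time error, as described above
```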
Automated Untyped Forms Migration --------------------------------- When upgrading to Angular 14, an included migration will automatically replace all the forms classes in your code with corresponding untyped versions. For example, the snippet from above would become: ``` const login = new UntypedFormGroup({ email: new UntypedFormControl(''), password: new UntypedFormControl(''), }); ``` Each `Untyped` symbol has exactly the same semantics as in previous Angular versions, so your application should continue to compile as before. By removing the `Untyped` prefixes, you can incrementally enable the types. `[FormControl](../api/forms/formcontrol)`: Getting Started ----------------------------------------------------------- The simplest possible form consists of a single control: ``` const email = new FormControl('user@example.com'); ``` This control will be automatically inferred to have the type `[FormControl](../api/forms/formcontrol)<string|null>`. TypeScript will automatically enforce this type throughout the [`FormControl` API](../api/forms/formcontrol), such as `email.value`, `email.valueChanges`, `email.setValue(...)`, etc. ### Nullability You might wonder: why does the type of this control include `null`? This is because the control can become `null` at any time, by calling reset: ``` const email = new FormControl('user@example.com'); email.reset(); console.log(email.value); // null ``` TypeScript will enforce that you always handle the possibility that the control has become `null`. If you want to make this control non-nullable, you may use the `nonNullable` option. This will cause the control to reset to its initial value, instead of `null`: ``` const email = new FormControl('user@example.com', {nonNullable: true}); email.reset(); console.log(email.value); // user@example.com ``` To reiterate, this option affects the runtime behavior of your form when `.reset()` is called, and should be flipped with care. ### Specifying an Explicit Type It is possible to specify the type, instead of relying on inference. Consider a control that is initialized to `null`. Because the initial value is `null`, TypeScript will infer `[FormControl](../api/forms/formcontrol)<null>`, which is narrower than we want. ``` const email = new FormControl(null); email.setValue('user@example.com'); // Error! ``` To prevent this, we explicitly specify the type as `string|null`: ``` const email = new FormControl<string|null>(null); email.setValue('user@example.com'); ``` `[FormArray](../api/forms/formarray)`: Dynamic, Homogeneous Collections ----------------------------------------------------------------------- A `[FormArray](../api/forms/formarray)` contains an open-ended list of controls. The type parameter corresponds to the type of each inner control: ``` const names = new FormArray([new FormControl('Alex')]); names.push(new FormControl('Jess')); ``` This `[FormArray](../api/forms/formarray)` will have the inner controls type `[FormControl](../api/forms/formcontrol)<string|null>`. If you want to have multiple different element types inside the array, you must use `[UntypedFormArray](../api/forms/untypedformarray)`, because TypeScript cannot infer which element type will occur at which position. 
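By way of example, an array that deliberately mixes element types can fall back to the untyped variant. This is an illustrative sketch rather than code from this guide.

```
import { FormControl, UntypedFormArray } from '@angular/forms';

// One string control and one number control in the same array.
// The typed FormArray cannot express this mix, so UntypedFormArray is used.
const mixed = new UntypedFormArray([
  new FormControl('Alex'),
  new FormControl(42),
]);
```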
`[FormGroup](../api/forms/formgroup)` and `[FormRecord](../api/forms/formrecord)` ---------------------------------------------------------------------------------- Angular provides the `[FormGroup](../api/forms/formgroup)` type for forms with an enumerated set of keys, and a type called `[FormRecord](../api/forms/formrecord)`, for open-ended or dynamic groups. ### Partial Values Consider again a login form: ``` const login = new FormGroup({ email: new FormControl('', {nonNullable: true}), password: new FormControl('', {nonNullable: true}), }); ``` On any `[FormGroup](../api/forms/formgroup)`, it is [possible to disable controls](../api/forms/formgroup). Any disabled control will not appear in the group's value. As a consequence, the type of `login.value` is `Partial<{[email](../api/forms/emailvalidator): string, password: string}>`. The `Partial` in this type means that each member might be undefined. More specifically, the type of `login.value.email` is `string|undefined`, and TypeScript will enforce that you handle the possibly `undefined` value (if you have `strictNullChecks` enabled). If you want to access the value *including* disabled controls, and thus bypass possible `undefined` fields, you can use `login.getRawValue()`. ### Optional Controls and Dynamic Groups Some forms have controls that may or may not be present, which can be added and removed at runtime. You can represent these controls using *optional fields*: ``` interface LoginForm { email: FormControl<string>; password?: FormControl<string>; } const login = new FormGroup<LoginForm>({ email: new FormControl('', {nonNullable: true}), password: new FormControl('', {nonNullable: true}), }); login.removeControl('password'); ``` In this form, we explicitly specify the type, which allows us to make the `password` control optional. TypeScript will enforce that only optional controls can be added or removed. ### `[FormRecord](../api/forms/formrecord)` Some `[FormGroup](../api/forms/formgroup)` usages do not fit the above pattern because the keys are not known ahead of time. The `[FormRecord](../api/forms/formrecord)` class is designed for that case: ``` const addresses = new FormRecord<FormControl<string|null>>({}); addresses.addControl('Andrew', new FormControl('2340 Folsom St')); ``` Any control of type `string|null` can be added to this `[FormRecord](../api/forms/formrecord)`. If you need a `[FormGroup](../api/forms/formgroup)` that is both dynamic (open-ended) and heterogeneous (the controls are different types), no improved type safety is possible, and you should use `[UntypedFormGroup](../api/forms/untypedformgroup)`. A `[FormRecord](../api/forms/formrecord)` can also be built with the `[FormBuilder](../api/forms/formbuilder)`: ``` const addresses = fb.record({'Andrew': '2340 Folsom St'}); ``` `[FormBuilder](../api/forms/formbuilder)` and `[NonNullableFormBuilder](../api/forms/nonnullableformbuilder)` -------------------------------------------------------------------------------------------------------------- The `[FormBuilder](../api/forms/formbuilder)` class has been upgraded to support the new types as well, in the same manner as the above examples. Additionally, an additional builder is available: `[NonNullableFormBuilder](../api/forms/nonnullableformbuilder)`. This type is shorthand for specifying `{nonNullable: true}` on every control, and can eliminate significant boilerplate from large non-nullable forms. 
You can access it using the `nonNullable` property on a `[FormBuilder](../api/forms/formbuilder)`: ``` const fb = new FormBuilder(); const login = fb.nonNullable.group({ email: '', password: '', }); ``` In the above example, both inner controls will be non-nullable (i.e. `nonNullable` will be set). You can also inject it using the name `[NonNullableFormBuilder](../api/forms/nonnullableformbuilder)`. Last reviewed on Tue May 10 2022 angular Background processing using web workers Background processing using web workers ======================================= [Web workers](https://developer.mozilla.org/docs/Web/API/Web_Workers_API) let you run CPU-intensive computations in a background thread, freeing the main thread to update the user interface. Applications that perform a lot of computations, like generating Computer-Aided Design (CAD) drawings or doing heavy geometric calculations, can use web workers to increase performance. > The Angular CLI does not support running itself in a web worker. > > Adding a web worker ------------------- To add a web worker to an existing project, use the Angular CLI `ng generate` command. ``` ng generate web-worker <location> ``` You can add a web worker anywhere in your application. For example, to add a web worker to the root component, `src/app/app.component.ts`, run the following command. ``` ng generate web-worker app ``` The command performs the following actions. 1. Configures your project to use web workers, if it isn't already. 2. Adds the following scaffold code to `src/app/app.worker.ts` to receive messages. ``` addEventListener('message', ({ data }) => { const response = `worker response to ${data}`; postMessage(response); }); ``` 3. Adds the following scaffold code to `src/app/app.component.ts` to use the worker. ``` if (typeof Worker !== 'undefined') { // Create a new const worker = new Worker(new URL('./app.worker', import.meta.url)); worker.onmessage = ({ data }) => { console.log(`page got message: ${data}`); }; worker.postMessage('hello'); } else { // Web workers are not supported in this environment. // You should add a fallback so that your program still executes correctly. } ``` After you create this initial scaffold, you must refactor your code to use the web worker by sending messages to and from the worker. > Some environments or platforms, such as `@angular/platform-server` used in [Server-side Rendering](universal), don't support web workers. To ensure that your application works in these environments, you must provide a fallback mechanism to perform the computations that the worker would otherwise perform. > > Last reviewed on Mon Feb 28 2022 angular Structural directives Structural directives ===================== This guide is about structural directives and provides conceptual information on how such directives work, how Angular interprets their shorthand syntax, and how to add template guard properties to catch template type errors. Structural directives are directives which change the DOM layout by adding and removing DOM elements. Angular provides a set of built-in structural directives (such as `[NgIf](../api/common/ngif)`, `[NgForOf](../api/common/ngforof)`, `[NgSwitch](../api/common/ngswitch)` and others) which are commonly used in all Angular projects. For more information, see [Built-in directives](built-in-directives). > For the example application that this page describes, see the live example.
> > Structural directive shorthand ------------------------------ When structural directives are applied, they are generally prefixed by an asterisk, `*`, such as `*[ngIf](../api/common/ngif)`. This convention is shorthand that Angular interprets and converts into a longer form. Angular transforms the asterisk in front of a structural directive into an `[<ng-template>](../api/core/ng-template)` that surrounds the host element and its descendants. For example, let's take the following code, which uses an `*[ngIf](../api/common/ngif)` to display the hero's name if `hero` exists: ``` <div *ngIf="hero" class="name">{{hero.name}}</div> ``` Angular creates an `[<ng-template>](../api/core/ng-template)` element and applies the `*[ngIf](../api/common/ngif)` directive onto it, where it becomes a property binding in square brackets, `[[ngIf](../api/common/ngif)]`. The rest of the `<div>`, including its class attribute, is then moved inside the `[<ng-template>](../api/core/ng-template)`: ``` <ng-template [ngIf]="hero"> <div class="name">{{hero.name}}</div> </ng-template> ``` Note that Angular does not actually create a real `[<ng-template>](../api/core/ng-template)` element, but instead only renders the `<div>` element. ``` <div _ngcontent-c0>Mr. Nice</div> ``` The following example compares the shorthand use of the asterisk in `*[ngFor](../api/common/ngfor)` with the longhand `[<ng-template>](../api/core/ng-template)` form: ``` <div *ngFor="let hero of heroes; let i=index; let odd=odd; trackBy: trackById" [class.odd]="odd"> ({{i}}) {{hero.name}} </div> <ng-template ngFor let-hero [ngForOf]="heroes" let-i="index" let-odd="odd" [ngForTrackBy]="trackById"> <div [class.odd]="odd"> ({{i}}) {{hero.name}} </div> </ng-template> ``` Here, everything related to the `[ngFor](../api/common/ngfor)` structural directive is moved to the `[<ng-template>](../api/core/ng-template)`. All other bindings and attributes on the element apply to the `<div>` element within the `[<ng-template>](../api/core/ng-template)`. Other modifiers on the host element, in addition to the `[ngFor](../api/common/ngfor)` string, remain in place as the element moves inside the `[<ng-template>](../api/core/ng-template)`. In this example, the `[class.odd]="odd"` stays on the `<div>`. The `let` keyword declares a template input variable that you can reference within the template. The input variables in this example are `hero`, `i`, and `odd`. The parser translates `let hero`, `let i`, and `let odd` into variables named `let-hero`, `let-i`, and `let-odd`. The `let-i` and `let-odd` variables become `let i=index` and `let odd=odd`. Angular sets `i` and `odd` to the current value of the context's `index` and `odd` properties. The parser applies PascalCase to all directives and prefixes them with the directive's attribute name, such as ngFor. For example, the `[ngFor](../api/common/ngfor)` input properties, `of` and `trackBy`, map to `[ngForOf](../api/common/ngforof)` and `ngForTrackBy`. As the `[NgFor](../api/common/ngfor)` directive loops through the list, it sets and resets properties of its own context object. These properties can include, but aren't limited to, `index`, `odd`, and a special property named `$implicit`. Angular sets `let-hero` to the value of the context's `$implicit` property, which `[NgFor](../api/common/ngfor)` has initialized with the hero for the current iteration. For more information, see the [NgFor API](../api/common/ngforof "API: NgFor") and [NgForOf API](../api/common/ngforof) documentation.
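Conceptually, the per-row context object that `NgForOf` supplies to each embedded view looks roughly like the following sketch (the values are placeholders; see the `NgForOfContext` API documentation for the full set of properties):

```
// Sketch of the context object for one row (not the actual NgForOfContext class)
const hero = { name: 'Dr. Nice' };
const rowContext = {
  $implicit: hero, // picked up by `let-hero`
  index: 0,        // picked up by `let-i="index"`
  odd: false,      // picked up by `let-odd="odd"`
};
```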
> Note that Angular's `[<ng-template>](../api/core/ng-template)` element defines a template that doesn't render anything by default. If you just wrap elements in an `[<ng-template>](../api/core/ng-template)` without applying a structural directive, those elements will not be rendered. > > For more information, see the [ng-template API](../api/core/ng-template) documentation. > > One structural directive per element ------------------------------------ It's quite a common use case to repeat a block of HTML, but only when a particular condition is true. An intuitive way to do that is to put both an `*[ngFor](../api/common/ngfor)` and an `*[ngIf](../api/common/ngif)` on the same element. However, since both `*[ngFor](../api/common/ngfor)` and `*[ngIf](../api/common/ngif)` are structural directives, this would be treated as an error by the compiler. You may apply only one *structural* directive to an element. The reason is simplicity. Structural directives can do complex things with the host element and its descendants. When two directives lay claim to the same host element, which one should take precedence? Which should go first, the `[NgIf](../api/common/ngif)` or the `[NgFor](../api/common/ngfor)`? Can the `[NgIf](../api/common/ngif)` cancel the effect of the `[NgFor](../api/common/ngfor)`? If so (and it seems like it should be so), how should Angular generalize the ability to cancel for other structural directives? There are no easy answers to these questions. Prohibiting multiple structural directives makes them moot. There's an easy solution for this use case: put the `*[ngIf](../api/common/ngif)` on a container element that wraps the `*[ngFor](../api/common/ngfor)` element. One or both elements can be an `[<ng-container>](../api/core/ng-container)` so that no extra DOM elements are generated. Creating a structural directive ------------------------------- This section guides you through creating an `UnlessDirective` and setting `condition` values. The `UnlessDirective` does the opposite of `[NgIf](../api/common/ngif)`, and `condition` values can be set to `true` or `false`. `[NgIf](../api/common/ngif)` displays the template content when the condition is `true`. `UnlessDirective` displays the content when the condition is `false`. Following is the `UnlessDirective` selector, `appUnless`, applied to the paragraph element. When `condition` is `false`, the browser displays the sentence. ``` <p *appUnless="condition">Show this sentence unless the condition is true.</p> ``` 1. Using the Angular CLI, run the following command, where `unless` is the name of the directive: ``` ng generate directive unless ``` Angular creates the directive class and specifies the CSS selector, `appUnless`, that identifies the directive in a template. 2. Import `[Input](../api/core/input)`, `[TemplateRef](../api/core/templateref)`, and `[ViewContainerRef](../api/core/viewcontainerref)`. ``` import { Directive, Input, TemplateRef, ViewContainerRef } from '@angular/core'; @Directive({ selector: '[appUnless]'}) export class UnlessDirective { } ``` 3. Inject `[TemplateRef](../api/core/templateref)` and `[ViewContainerRef](../api/core/viewcontainerref)` in the directive constructor as private variables.
``` constructor( private templateRef: TemplateRef<any>, private viewContainer: ViewContainerRef) { } ``` The `UnlessDirective` creates an [embedded view](../api/core/embeddedviewref "API: EmbeddedViewRef") from the Angular-generated `[<ng-template>](../api/core/ng-template)` and inserts that view in a [view container](../api/core/viewcontainerref "API: ViewContainerRef") adjacent to the directive's original `<p>` host element. [`TemplateRef`](../api/core/templateref "API: TemplateRef") helps you get to the `[<ng-template>](../api/core/ng-template)` contents and [`ViewContainerRef`](../api/core/viewcontainerref "API: ViewContainerRef") accesses the view container. 4. Add an `appUnless` `@[Input](../api/core/input)()` property with a setter. ``` @Input() set appUnless(condition: boolean) { if (!condition && !this.hasView) { this.viewContainer.createEmbeddedView(this.templateRef); this.hasView = true; } else if (condition && this.hasView) { this.viewContainer.clear(); this.hasView = false; } } ``` Angular sets the `appUnless` property whenever the value of the condition changes. * If the condition is falsy and Angular hasn't created the view previously, the setter causes the view container to create the embedded view from the template * If the condition is truthy and the view is currently displayed, the setter clears the container, which disposes of the view The complete directive is as follows: ``` import { Directive, Input, TemplateRef, ViewContainerRef } from '@angular/core'; /** * Add the template content to the DOM unless the condition is true. */ @Directive({ selector: '[appUnless]'}) export class UnlessDirective { private hasView = false; constructor( private templateRef: TemplateRef<any>, private viewContainer: ViewContainerRef) { } @Input() set appUnless(condition: boolean) { if (!condition && !this.hasView) { this.viewContainer.createEmbeddedView(this.templateRef); this.hasView = true; } else if (condition && this.hasView) { this.viewContainer.clear(); this.hasView = false; } } } ``` ### Testing the directive In this section, you'll update your application to test the `UnlessDirective`. 1. Add a `condition` set to `false` in the `AppComponent`. ``` condition = false; ``` 2. Update the template to use the directive. Here, `*appUnless` is on two `<p>` tags with opposite `condition` values, one `true` and one `false`. ``` <p *appUnless="condition" class="unless a"> (A) This paragraph is displayed because the condition is false. </p> <p *appUnless="!condition" class="unless b"> (B) Although the condition is true, this paragraph is displayed because appUnless is set to false. </p> ``` The asterisk is shorthand that marks `appUnless` as a structural directive. When the `condition` is falsy, the top (A) paragraph appears and the bottom (B) paragraph disappears. When the `condition` is truthy, the top (A) paragraph disappears and the bottom (B) paragraph appears. 3. To change and display the value of `condition` in the browser, add markup that displays the status and a button. ``` <p> The condition is currently <span [ngClass]="{ 'a': !condition, 'b': condition, 'unless': true }">{{condition}}</span>. <button type="button" (click)="condition = !condition" [ngClass] = "{ 'a': condition, 'b': !condition }" > Toggle condition to {{condition ? 'false' : 'true'}} </button> </p> ``` To verify that the directive works, click the button to change the value of `condition`. 
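As with the built-in structural directives, the `*appUnless` shorthand is expanded by the compiler into an `[<ng-template>](../api/core/ng-template)` form. A sketch of the roughly equivalent longhand markup:

```
<!-- Shorthand form -->
<p *appUnless="condition">Show this sentence unless the condition is true.</p>

<!-- Roughly equivalent longhand form -->
<ng-template [appUnless]="condition">
  <p>Show this sentence unless the condition is true.</p>
</ng-template>
```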
Structural directive syntax reference ------------------------------------- When you write your own structural directives, use the following syntax: ``` *:prefix="( :let | :expression ) (';' | ',')? ( :let | :as | :keyExp )*" ``` The following tables describe each portion of the structural directive grammar: ``` as = :export "as" :local ";"? ``` ``` keyExp = :key ":"? :expression ("as" :local)? ";"? ``` ``` let = "let" :local "=" :export ";"? ``` | Keyword | Details | | --- | --- | | `prefix` | HTML attribute key | | `key` | HTML attribute key | | `local` | Local variable name used in the template | | `export` | Value exported by the directive under a given name | | `expression` | Standard Angular expression | ### How Angular translates shorthand Angular translates structural directive shorthand into the normal binding syntax as follows: | Shorthand | Translation | | --- | --- | | `prefix` and naked `expression` | ``` [prefix]="expression" ``` | | `keyExp` | ``` [prefixKey] "expression" (let-prefixKey="export") ``` **NOTE**: The `prefix` is added to the `key` | | `let` | ``` let-local="export" ``` | ### Shorthand examples The following table provides shorthand examples: | Shorthand | How Angular interprets the syntax | | --- | --- | | ``` *ngFor="let item of [1,2,3]" ``` | ``` <ng-template ngFor              let-item              [ngForOf]="[1,2,3]"> ``` | | ``` *ngFor="let item of [1,2,3] as items;         trackBy: myTrack; index as i" ``` | ``` <ng-template ngFor              let-item              [ngForOf]="[1,2,3]"              let-items="ngForOf"              [ngForTrackBy]="myTrack"              let-i="index"> ``` | | ``` *ngIf="exp" ``` | ``` <ng-template [ngIf]="exp"> ``` | | ``` *ngIf="exp as value" ``` | ``` <ng-template [ngIf]="exp"              let-value="ngIf"> ``` | Improving template type checking for custom directives ------------------------------------------------------ You can improve template type checking for custom directives by adding template guard properties to your directive definition. These properties help the Angular template type checker find mistakes in the template at compile time, which can avoid runtime errors. These properties are as follows: * A property `ngTemplateGuard_(someInputProperty)` lets you specify a more accurate type for an input expression within the template * The `ngTemplateContextGuard` static property declares the type of the template context This section provides examples of both kinds of type-guard property. For more information, see [Template type checking](template-typecheck "Template type-checking guide"). ### Making in-template type requirements more specific with template guards A structural directive in a template controls whether that template is rendered at run time, based on its input expression. To help the compiler catch template type errors, you should specify as closely as possible the required type of a directive's input expression when it occurs inside the template. A type guard function narrows the expected type of an input expression to a subset of types that might be passed to the directive within the template at run time. You can provide such a function to help the type-checker infer the proper type for the expression at compile time. For example, the `[NgIf](../api/common/ngif)` implementation uses type-narrowing to ensure that the template is only instantiated if the input expression to `*[ngIf](../api/common/ngif)` is truthy. 
To provide the specific type requirement, the `[NgIf](../api/common/ngif)` directive defines a [static property `ngTemplateGuard_ngIf: 'binding'`](../api/common/ngif#static-properties). The `binding` value is a special case for a common kind of type-narrowing where the input expression is evaluated in order to satisfy the type requirement. To provide a more specific type for an input expression to a directive within the template, add an `ngTemplateGuard_xx` property to the directive, where the suffix to the static property name, `xx`, is the `@[Input](../api/core/input)()` field name. The value of the property can be either a general type-narrowing function based on its return type, or the string `"binding"`, as in the case of `[NgIf](../api/common/ngif)`. For example, consider the following structural directive that takes the result of a template expression as an input: ``` import { Directive, Input, TemplateRef, ViewContainerRef } from '@angular/core'; import { Loaded, LoadingState } from './loading-state'; @Directive({ selector: '[appIfLoaded]' }) export class IfLoadedDirective<T> { private isViewCreated = false; @Input('appIfLoaded') set state(state: LoadingState<T>) { if (!this.isViewCreated && state.type === 'loaded') { this.viewContainerRef.createEmbeddedView(this.templateRef); this.isViewCreated = true; } else if (this.isViewCreated && state.type !== 'loaded') { this.viewContainerRef.clear(); this.isViewCreated = false; } } constructor( private readonly viewContainerRef: ViewContainerRef, private readonly templateRef: TemplateRef<unknown> ) {} static ngTemplateGuard_appIfLoaded<T>( dir: IfLoadedDirective<T>, state: LoadingState<T> ): state is Loaded<T> { return true; } } ``` ``` export type Loaded<T> = { type: 'loaded', data: T }; export type Loading = { type: 'loading' }; export type LoadingState<T> = Loaded<T> | Loading; ``` ``` import { Component } from '@angular/core'; import { LoadingState } from './loading-state'; import { Hero, heroes } from './hero'; @Component({ selector: 'app-hero', template: ` <button (click)="onLoadHero()">Load Hero</button> <p *appIfLoaded="heroLoadingState">{{ heroLoadingState.data | json }}</p> `, }) export class HeroComponent { heroLoadingState: LoadingState<Hero> = { type: 'loading' }; onLoadHero(): void { this.heroLoadingState = { type: 'loaded', data: heroes[0] }; } } ``` In this example, the `LoadingState<T>` type permits either of two states, `Loaded<T>` or `Loading`. The expression used as the directive's `state` input (aliased as `appIfLoaded`) is of the umbrella type `LoadingState`, as it's unknown what the loading state is at that point. The `IfLoadedDirective` definition declares the static field `ngTemplateGuard_appIfLoaded`, which expresses the narrowing behavior. Within the `AppComponent` template, the `*appIfLoaded` structural directive should render this template only when `state` is actually `Loaded<Hero>`. The type guard lets the type checker infer that the acceptable type of `state` within the template is a `Loaded<T>`, and further infer that `T` must be an instance of `Hero`. ### Typing the directive's context If your structural directive provides a context to the instantiated template, you can properly type it inside the template by providing a static `ngTemplateContextGuard` function. The following snippet shows an example of such a function. 
``` import { Directive, Input, TemplateRef, ViewContainerRef } from '@angular/core'; @Directive({ selector: '[appTrigonometry]' }) export class TrigonometryDirective { private isViewCreated = false; private readonly context = new TrigonometryContext(); @Input('appTrigonometry') set angle(angleInDegrees: number) { const angleInRadians = toRadians(angleInDegrees); this.context.sin = Math.sin(angleInRadians); this.context.cos = Math.cos(angleInRadians); this.context.tan = Math.tan(angleInRadians); if (!this.isViewCreated) { this.viewContainerRef.createEmbeddedView(this.templateRef, this.context); this.isViewCreated = true; } } constructor( private readonly viewContainerRef: ViewContainerRef, private readonly templateRef: TemplateRef<TrigonometryContext> ) {} // Make sure the template checker knows the type of the context with which the // template of this directive will be rendered static ngTemplateContextGuard( directive: TrigonometryDirective, context: unknown ): context is TrigonometryContext { return true; } } class TrigonometryContext { sin = 0; cos = 0; tan = 0; } function toRadians(degrees: number): number { return degrees * (Math.PI / 180); } ``` ``` <ul *appTrigonometry="30; sin as s; cos as c; tan as t"> <li>sin(30°): {{ s }}</li> <li>cos(30°): {{ c }}</li> <li>tan(30°): {{ t }}</li> </ul> ``` Last reviewed on Mon Feb 28 2022
angular Class and style binding Class and style binding ======================= Use class and style bindings to add and remove CSS class names from an element's `class` attribute and to set styles dynamically. Prerequisites ------------- * [Property binding](property-binding) Binding to a single CSS `class` ------------------------------- To create a single class binding, type the following: `[class.sale]="onSale"` Angular adds the class when the bound expression, `onSale` is truthy, and it removes the class when the expression is falsy—with the exception of `undefined`. See [styling delegation](style-precedence#styling-delegation) for more information. Binding to multiple CSS classes ------------------------------- To bind to multiple classes, type the following: `[class]="classExpression"` The expression can be one of: * A space-delimited string of class names. * An object with class names as the keys and truthy or falsy expressions as the values. * An array of class names. With the object format, Angular adds a class only if its associated value is truthy. > With any object-like expression—such as `object`, `Array`, `Map`, or `Set` —the identity of the object must change for Angular to update the class list. Updating the property without changing object identity has no effect. > > If there are multiple bindings to the same class name, Angular uses [styling precedence](style-precedence) to determine which binding to use. The following table summarizes class binding syntax. | Binding Type | Syntax | Input Type | Example Input Values | | --- | --- | --- | --- | | Single class binding | `[class.sale]="onSale"` | `boolean | undefined | null` | `true`, `false` | | Multi-class binding | `[class]="classExpression"` | `string` | `"my-class-1 my-class-2 my-class-3"` | | Multi-class binding | `[class]="classExpression"` | `Record<string, boolean | undefined | null>` | `{foo: true, bar: false}` | | Multi-class binding | `[class]="classExpression"` | `Array<string>` | `['foo', 'bar']` | Binding to a single style ------------------------- To create a single style binding, use the prefix `[style](../api/animations/style)` followed by a dot and the name of the CSS style. For example, to set the `width` style, type the following: `[style.width]="width"` Angular sets the property to the value of the bound expression, which is usually a string. Optionally, you can add a unit extension like `em` or `%`, which requires a number type. 1. To write a style in dash-case, type the following: ``` <nav [style.background-color]="expression"></nav> ``` 2. To write a style in camelCase, type the following: ``` <nav [style.backgroundColor]="expression"></nav> ``` Binding to multiple styles -------------------------- To toggle multiple styles, bind to the `[[style](../api/animations/style)]` attribute—for example, `[[style](../api/animations/style)]="styleExpression"`. The `styleExpression` can be one of: * A string list of styles such as `"width: 100px; height: 100px; background-color: cornflowerblue;"`. * An object with style names as the keys and style values as the values, such as `{width: '100px', height: '100px', backgroundColor: 'cornflowerblue'}`. Note that binding an array to `[[style](../api/animations/style)]` is not supported. > When binding `[[style](../api/animations/style)]` to an object expression, the identity of the object must change for Angular to update the class list. Updating the property without changing object identity has no effect. 
> > ### Single and multiple-style binding example ``` @Component({ selector: 'app-nav-bar', template: ` <nav [style]='navStyle'> <a [style.text-decoration]="activeLinkStyle">Home Page</a> <a [style.text-decoration]="linkStyle">Login</a> </nav>` }) export class NavBarComponent { navStyle = 'font-size: 1.2rem; color: cornflowerblue;'; linkStyle = 'underline'; activeLinkStyle = 'overline'; /* . . . */ } ``` If there are multiple bindings to the same style attribute, Angular uses [styling precedence](style-precedence) to determine which binding to use. The following table summarizes style binding syntax. | Binding Type | Syntax | Input Type | Example Input Values | | --- | --- | --- | --- | | Single style binding | `[style.width]="width"` | `string | undefined | null` | `"100px"` | | Single style binding with units | `[style.width.px]="width"` | `number | undefined | null` | `100` | | Multi-style binding | `[[style](../api/animations/style)]="styleExpression"` | `string` | `"width: 100px; height: 100px"` | | Multi-style binding | `[[style](../api/animations/style)]="styleExpression"` | `Record<string, string | undefined | null>` | `{width: '100px', height: '100px'}` | Styling precedence ------------------ A single HTML element can have its CSS class list and style values bound to multiple sources (for example, host bindings from multiple directives). What's next ----------- * [Component styles](component-styles) * [Introduction to Angular animations](animations) Last reviewed on Mon May 09 2022 angular Manage marked text with custom IDs Manage marked text with custom IDs ================================== The Angular extractor generates a file with a translation unit entry for each of the following instances. * Each `i18n` attribute in a component template * Each [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular") tagged message string in component code As described in [How meanings control text extraction and merges](i18n-common-prepare#how-meanings-control-text-extraction-and-merges "How meanings control text extraction and merges - Prepare components for translations | Angular"), Angular assigns each translation unit a unique ID. The following example displays translation units with unique IDs. ``` <trans-unit id="ba0cc104d3d69bf669f97b8d96a4c5d8d9559aa3" datatype="html"> ``` When you change the translatable text, the extractor generates a new ID for that translation unit. In most cases, changes in the source text also require a change to the translation. Therefore, using a new ID keeps the text change in sync with translations. However, some translation systems require a specific form or syntax for the ID. To address the requirement, use a custom ID to mark text. Most developers don't need to use a custom ID. If you want to use a unique syntax to convey additional metadata, use a custom ID. Additional metadata may include the library, component, or area of the application in which the text appears. To specify a custom ID in the `i18n` attribute or [`$localize`](../api/localize/init/%24localize "$localize | init - localize - API | Angular") tagged message string, use the `@@` prefix. The following example defines the `introductionHeader` custom ID in a heading element. ``` <h1 i18n="@@introductionHeader">Hello i18n!</h1> ``` The following example defines the `introductionHeader` custom ID for a variable.
``` variableText1 = $localize `:@@introductionHeader:Hello i18n!`; ``` When you specify a custom ID, the extractor generates a translation unit with the custom ID. ``` <trans-unit id="introductionHeader" datatype="html"> ``` If you change the text, the extractor does not change the ID. As a result, you don't have to take the extra step to update the translation. The drawback of using custom IDs is that if you change the text, your translation may be out-of-sync with the newly changed source text. #### Use a custom ID with a description Use a custom ID in combination with a description and a meaning to further help the translator. The following example includes a description, followed by the custom ID. ``` <h1 i18n="An introduction header for this sample@@introductionHeader">Hello i18n!</h1> ``` The following example defines the `introductionHeader` custom ID and description for a variable. ``` variableText2 = $localize `:An introduction header for this sample@@introductionHeader:Hello i18n!`; ``` The following example adds a meaning. ``` <h1 i18n="site header|An introduction header for this sample@@introductionHeader">Hello i18n!</h1> ``` The following example defines the `introductionHeader` custom ID for a variable. ``` variableText3 = $localize `:site header|An introduction header for this sample@@introductionHeader:Hello i18n!`; ``` #### Define unique custom IDs Be sure to define custom IDs that are unique. If you use the same ID for two different text elements, the extraction tool extracts only the first one, and Angular uses the translation in place of both original text elements. For example, in the following code snippet the same `myId` custom ID is defined for two different text elements. ``` <h3 i18n="@@myId">Hello</h3> <!-- ... --> <p i18n="@@myId">Good bye</p> ``` The following displays the translation in French. ``` <trans-unit id="myId" datatype="html"> <source>Hello</source> <target state="new">Bonjour</target> </trans-unit> ``` Both elements now use the same translation (`Bonjour`), because both were defined with the same custom ID. ``` <h3>Bonjour</h3> <!-- ... --> <p>Bonjour</p> ``` Last reviewed on Mon Feb 28 2022 angular Basics of testing components Basics of testing components ============================ A component, unlike all other parts of an Angular application, combines an HTML template and a TypeScript class. The component truly is the template and the class *working together*. To adequately test a component, you should test that they work together as intended. Such tests require creating the component's host element in the browser DOM, as Angular does, and investigating the component class's interaction with the DOM as described by its template. The Angular `[TestBed](../api/core/testing/testbed)` facilitates this kind of testing as you'll see in the following sections. But in many cases, *testing the component class alone*, without DOM involvement, can validate much of the component's behavior in a straightforward, more obvious way. > If you'd like to experiment with the application that this guide describes, run it in your browser or download and run it locally. > > Component class testing ----------------------- Test a component class on its own as you would test a service class. Component class testing should be kept very clean and simple. It should test only a single unit. At first glance, you should be able to understand what the test is testing. 
Consider this `LightswitchComponent` which toggles a light on and off (represented by an on-screen message) when the user clicks the button. ``` @Component({ selector: 'lightswitch-comp', template: ` <button type="button" (click)="clicked()">Click me!</button> <span>{{message}}</span>` }) export class LightswitchComponent { isOn = false; clicked() { this.isOn = !this.isOn; } get message() { return `The light is ${this.isOn ? 'On' : 'Off'}`; } } ``` You might decide only to test that the `clicked()` method toggles the light's *on/off* state and sets the message appropriately. This component class has no dependencies. To test these types of classes, follow the same steps as you would for a service that has no dependencies: 1. Create a component using the new keyword. 2. Poke at its API. 3. Assert expectations on its public state. ``` describe('LightswitchComp', () => { it('#clicked() should toggle #isOn', () => { const comp = new LightswitchComponent(); expect(comp.isOn) .withContext('off at first') .toBe(false); comp.clicked(); expect(comp.isOn) .withContext('on after click') .toBe(true); comp.clicked(); expect(comp.isOn) .withContext('off after second click') .toBe(false); }); it('#clicked() should set #message to "is on"', () => { const comp = new LightswitchComponent(); expect(comp.message) .withContext('off at first') .toMatch(/is off/i); comp.clicked(); expect(comp.message) .withContext('on after clicked') .toMatch(/is on/i); }); }); ``` Here is the `DashboardHeroComponent` from the *Tour of Heroes* tutorial. ``` export class DashboardHeroComponent { @Input() hero!: Hero; @Output() selected = new EventEmitter<Hero>(); click() { this.selected.emit(this.hero); } } ``` It appears within the template of a parent component, which binds a *hero* to the `@[Input](../api/core/input)` property and listens for an event raised through the *selected* `@[Output](../api/core/output)` property. You can test that the class code works without creating the `DashboardHeroComponent` or its parent component. ``` it('raises the selected event when clicked', () => { const comp = new DashboardHeroComponent(); const hero: Hero = {id: 42, name: 'Test'}; comp.hero = hero; comp.selected.pipe(first()).subscribe((selectedHero: Hero) => expect(selectedHero).toBe(hero)); comp.click(); }); ``` When a component has dependencies, you might want to use the `[TestBed](../api/core/testing/testbed)` to both create the component and its dependencies. The following `WelcomeComponent` depends on the `UserService` to know the name of the user to greet. ``` export class WelcomeComponent implements OnInit { welcome = ''; constructor(private userService: UserService) { } ngOnInit(): void { this.welcome = this.userService.isLoggedIn ? 'Welcome, ' + this.userService.user.name : 'Please log in.'; } } ``` You might start by creating a mock of the `UserService` that meets the minimum needs of this component. ``` class MockUserService { isLoggedIn = true; user = { name: 'Test User'}; } ``` Then provide and inject *both the* **component** *and the service* in the `[TestBed](../api/core/testing/testbed)` configuration. ``` beforeEach(() => { TestBed.configureTestingModule({ // provide the component-under-test and dependent service providers: [ WelcomeComponent, { provide: UserService, useClass: MockUserService } ] }); // inject both the component and the dependent service. 
comp = TestBed.inject(WelcomeComponent); userService = TestBed.inject(UserService); }); ``` Then exercise the component class, remembering to call the [lifecycle hook methods](lifecycle-hooks) as Angular does when running the application. ``` it('should not have welcome message after construction', () => { expect(comp.welcome).toBe(''); }); it('should welcome logged in user after Angular calls ngOnInit', () => { comp.ngOnInit(); expect(comp.welcome).toContain(userService.user.name); }); it('should ask user to log in if not logged in after ngOnInit', () => { userService.isLoggedIn = false; comp.ngOnInit(); expect(comp.welcome).not.toContain(userService.user.name); expect(comp.welcome).toContain('log in'); }); ``` Component DOM testing --------------------- Testing the component *class* is as straightforward as [testing a service](testing-services). But a component is more than just its class. A component interacts with the DOM and with other components. The *class-only* tests can tell you about class behavior. They cannot tell you if the component is going to render properly, respond to user input and gestures, or integrate with its parent and child components. None of the preceding *class-only* tests can answer key questions about how the components actually behave on screen. * Is `Lightswitch.clicked()` bound to anything such that the user can invoke it? * Is the `Lightswitch.message` displayed? * Can the user actually select the hero displayed by `DashboardHeroComponent`? * Is the hero name displayed as expected (such as uppercase)? * Is the welcome message displayed by the template of `WelcomeComponent`? These might not be troubling questions for the preceding simple components illustrated. But many components have complex interactions with the DOM elements described in their templates, causing HTML to appear and disappear as the component state changes. To answer these kinds of questions, you have to create the DOM elements associated with the components, you must examine the DOM to confirm that component state displays properly at the appropriate times, and you must simulate user interaction with the screen to determine whether those interactions cause the component to behave as expected. To write these kinds of test, you'll use additional features of the `[TestBed](../api/core/testing/testbed)` as well as other testing helpers. ### CLI-generated tests The CLI creates an initial test file for you by default when you ask it to generate a new component. 
For example, the following CLI command generates a `BannerComponent` in the `app/banner` folder (with inline template and styles): ``` ng generate component banner --inline-template --inline-style --module app ``` It also generates an initial test file for the component, `banner-external.component.spec.ts`, that looks like this: ``` import { ComponentFixture, TestBed, waitForAsync } from '@angular/core/testing'; import { BannerComponent } from './banner.component'; describe('BannerComponent', () => { let component: BannerComponent; let fixture: ComponentFixture<BannerComponent>; beforeEach(waitForAsync(() => { TestBed.configureTestingModule({declarations: [BannerComponent]}).compileComponents(); })); beforeEach(() => { fixture = TestBed.createComponent(BannerComponent); component = fixture.componentInstance; fixture.detectChanges(); }); it('should create', () => { expect(component).toBeDefined(); }); }); ``` > Because `compileComponents` is asynchronous, it uses the [`waitForAsync`](../api/core/testing/waitforasync) utility function imported from `@angular/core/testing`. > > Refer to the [waitForAsync](testing-components-scenarios#waitForAsync) section for more details. > > ### Reduce the setup Only the last three lines of this file actually test the component and all they do is assert that Angular can create the component. The rest of the file is boilerplate setup code anticipating more advanced tests that *might* become necessary if the component evolves into something substantial. You'll learn about these advanced test features in the following sections. For now, you can radically reduce this test file to a more manageable size: ``` describe('BannerComponent (minimal)', () => { it('should create', () => { TestBed.configureTestingModule({declarations: [BannerComponent]}); const fixture = TestBed.createComponent(BannerComponent); const component = fixture.componentInstance; expect(component).toBeDefined(); }); }); ``` In this example, the metadata object passed to `TestBed.configureTestingModule` simply declares `BannerComponent`, the component to test. ``` TestBed.configureTestingModule({declarations: [BannerComponent]}); ``` > There's no need to declare or import anything else. The default test module is pre-configured with something like the `[BrowserModule](../api/platform-browser/browsermodule)` from `@angular/platform-browser`. > > Later you'll call `TestBed.configureTestingModule()` with imports, providers, and more declarations to suit your testing needs. Optional `override` methods can further fine-tune aspects of the configuration. > > ### `[createComponent](../api/core/createcomponent)()` After configuring `[TestBed](../api/core/testing/testbed)`, you call its `[createComponent](../api/core/createcomponent)()` method. ``` const fixture = TestBed.createComponent(BannerComponent); ``` `TestBed.createComponent()` creates an instance of the `BannerComponent`, adds a corresponding element to the test-runner DOM, and returns a [`ComponentFixture`](testing-components-basics#component-fixture). > Do not re-configure `[TestBed](../api/core/testing/testbed)` after calling `[createComponent](../api/core/createcomponent)`. > > The `[createComponent](../api/core/createcomponent)` method freezes the current `[TestBed](../api/core/testing/testbed)` definition, closing it to further configuration. > > You cannot call any more `[TestBed](../api/core/testing/testbed)` configuration methods, not `configureTestingModule()`, nor `get()`, nor any of the `override...` methods. 
If you try, `[TestBed](../api/core/testing/testbed)` throws an error. > > ### `[ComponentFixture](../api/core/testing/componentfixture)` The [ComponentFixture](../api/core/testing/componentfixture) is a test harness for interacting with the created component and its corresponding element. Access the component instance through the fixture and confirm it exists with a Jasmine expectation: ``` const component = fixture.componentInstance; expect(component).toBeDefined(); ``` ### `beforeEach()` You will add more tests as this component evolves. Rather than duplicate the `[TestBed](../api/core/testing/testbed)` configuration for each test, you refactor to pull the setup into a Jasmine `beforeEach()` and some supporting variables: ``` describe('BannerComponent (with beforeEach)', () => { let component: BannerComponent; let fixture: ComponentFixture<BannerComponent>; beforeEach(() => { TestBed.configureTestingModule({declarations: [BannerComponent]}); fixture = TestBed.createComponent(BannerComponent); component = fixture.componentInstance; }); it('should create', () => { expect(component).toBeDefined(); }); }); ``` Now add a test that gets the component's element from `fixture.nativeElement` and looks for the expected text. ``` it('should contain "banner works!"', () => { const bannerElement: HTMLElement = fixture.nativeElement; expect(bannerElement.textContent).toContain('banner works!'); }); ``` ### `nativeElement` The value of `[ComponentFixture.nativeElement](../api/core/testing/componentfixture#nativeElement)` has the `any` type. Later you'll encounter the `[DebugElement.nativeElement](../api/core/debugelement#nativeElement)` and it too has the `any` type. Angular can't know at compile time what kind of HTML element the `nativeElement` is or if it even is an HTML element. The application might be running on a *non-browser platform*, such as the server or a [Web Worker](https://developer.mozilla.org/docs/Web/API/Web_Workers_API), where the element might have a diminished API or not exist at all. The tests in this guide are designed to run in a browser so a `nativeElement` value will always be an `HTMLElement` or one of its derived classes. Knowing that it is an `HTMLElement` of some sort, use the standard HTML `querySelector` to dive deeper into the element tree. Here's another test that calls `HTMLElement.querySelector` to get the paragraph element and look for the banner text: ``` it('should have <p> with "banner works!"', () => { const bannerElement: HTMLElement = fixture.nativeElement; const p = bannerElement.querySelector('p')!; expect(p.textContent).toEqual('banner works!'); }); ``` ### `[DebugElement](../api/core/debugelement)` The Angular *fixture* provides the component's element directly through the `fixture.nativeElement`. ``` const bannerElement: HTMLElement = fixture.nativeElement; ``` This is actually a convenience method, implemented as `fixture.debugElement.nativeElement`. ``` const bannerDe: DebugElement = fixture.debugElement; const bannerEl: HTMLElement = bannerDe.nativeElement; ``` There's a good reason for this circuitous path to the element. The properties of the `nativeElement` depend upon the runtime environment. You could be running these tests on a *non-browser* platform that doesn't have a DOM or whose DOM-emulation doesn't support the full `HTMLElement` API. Angular relies on the `[DebugElement](../api/core/debugelement)` abstraction to work safely across *all supported platforms*. 
Instead of creating an HTML element tree, Angular creates a `[DebugElement](../api/core/debugelement)` tree that wraps the *native elements* for the runtime platform. The `nativeElement` property unwraps the `[DebugElement](../api/core/debugelement)` and returns the platform-specific element object. Because the sample tests for this guide are designed to run only in a browser, a `nativeElement` in these tests is always an `HTMLElement` whose familiar methods and properties you can explore within a test. Here's the previous test, re-implemented with `fixture.debugElement.nativeElement`: ``` it('should find the <p> with fixture.debugElement.nativeElement)', () => { const bannerDe: DebugElement = fixture.debugElement; const bannerEl: HTMLElement = bannerDe.nativeElement; const p = bannerEl.querySelector('p')!; expect(p.textContent).toEqual('banner works!'); }); ``` The `[DebugElement](../api/core/debugelement)` has other methods and properties that are useful in tests, as you'll see elsewhere in this guide. You import the `[DebugElement](../api/core/debugelement)` symbol from the Angular core library. ``` import { DebugElement } from '@angular/core'; ``` ### `[By.css()](../api/platform-browser/by#css)` Although the tests in this guide all run in the browser, some applications might run on a different platform at least some of the time. For example, the component might render first on the server as part of a strategy to make the application launch faster on poorly connected devices. The server-side renderer might not support the full HTML element API. If it doesn't support `querySelector`, the previous test could fail. The `[DebugElement](../api/core/debugelement)` offers query methods that work for all supported platforms. These query methods take a *predicate* function that returns `true` when a node in the `[DebugElement](../api/core/debugelement)` tree matches the selection criteria. You create a *predicate* with the help of a `[By](../api/platform-browser/by)` class imported from a library for the runtime platform. Here's the `[By](../api/platform-browser/by)` import for the browser platform: ``` import { By } from '@angular/platform-browser'; ``` The following example re-implements the previous test with `[DebugElement.query()](../api/core/debugelement#query)` and the browser's `By.css` method. ``` it('should find the <p> with fixture.debugElement.query(By.css)', () => { const bannerDe: DebugElement = fixture.debugElement; const paragraphDe = bannerDe.query(By.css('p')); const p: HTMLElement = paragraphDe.nativeElement; expect(p.textContent).toEqual('banner works!'); }); ``` Some noteworthy observations: * The `[By.css()](../api/platform-browser/by#css)` static method selects `[DebugElement](../api/core/debugelement)` nodes with a [standard CSS selector](https://developer.mozilla.org/docs/Web/Guide/CSS/Getting_started/Selectors "CSS selectors"). * The query returns a `[DebugElement](../api/core/debugelement)` for the paragraph. * You must unwrap that result to get the paragraph element. When you're filtering by CSS selector and only testing properties of a browser's *native element*, the `By.css` approach might be overkill. It's often more straightforward and clear to filter with a standard `HTMLElement` method such as `querySelector()` or `querySelectorAll()`. Last reviewed on Mon Feb 28 2022
angular Review documentation Review documentation ==================== You can review the Angular documentation, even if you have never contributed to Angular before. Reviewing the Angular documentation provides a valuable contribution to the community. Finding and reporting issues in the documentation helps the community know that the content is up to date. Even if you don't find any problems, seeing that a document has been reviewed recently, gives readers confidence in the content. This topic describes how you can review and update the Angular documentation to help keep it up to date. #### To review a topic in angular.io Perform these steps in a browser. 1. [Find a topic to review](reviewing-content#find-topics-to-review) by: 1. Finding a topic with a **Last reviewed** date that is six months or more in the past. 2. Finding a topic that has no **Last reviewed** date. 3. Finding a topic that you've read recently. 2. Review the topic for errors or inaccuracies. 3. Complete the review. 1. If the topic looks good: 1. [Update or add the `@reviewed` entry](reviewing-content#update-the-last-reviewed-date) at the end of the topic's source code. 2. [Make a minor change to a documentation topic](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic) to publish the new reviewed date. 2. If you find an error that you don't feel comfortable fixing: 1. [Open a docs issue in GitHub](https://github.com/angular/angular/issues/new?assignees=&labels=&template=3-docs-bug.yaml). 2. [Update or add the `@reviewed` entry](reviewing-content#update-the-last-reviewed-date) at the end of the topic's source code. 3. [Make a minor change to a documentation topic](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic) to publish the new reviewed date. 3. If you find an error that needs only a minor change: 1. [Update or add the `@reviewed` entry](reviewing-content#update-the-last-reviewed-date) at the end of the topic's source code. 2. [Make a minor change to a documentation topic](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic) to fix the error and save the new reviewed date. 4. If you find an error that needs major changes: 1. Address the error: 1. [Make a major change](contributors-guide-overview#make-a-major-change), if you're comfortable, or 2. [Open a docs issue in GitHub](https://github.com/angular/angular/issues/new?assignees=&labels=&template=3-docs-bug.yaml). 2. Whether you fix the error or open a new issue, [update or add the `@reviewed` entry](reviewing-content#update-the-last-reviewed-date) at the end of the topic's source code. 3. [Make a minor change to a documentation topic](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic) to save the new reviewed date. Find topics to review --------------------- You can review any topic in the Angular documentation, but these are the topics that benefit most from your review. ### Topics that have not been reviewed in over six months At the bottom of some topics, there's a date that shows when the topic was last reviewed. If that date is over six months ago, the topic is ready for a review. This is an example of a **Last reviewed** date from the bottom of a topic. You can also see an example of this at the end of this topic. ### Topics that have never been reviewed If a topic doesn't have a **Last reviewed** date at the bottom, it has never been reviewed. You can review such a topic and add a new **Last reviewed** date after you review it. 
### Topics that you know have a problem If you know of a topic that has an error or inaccuracy, you can review it and make corrections during your review. If you don't feel comfortable fixing an error during your review, [open a docs issue in GitHub](https://github.com/angular/angular/issues/new?assignees=&labels=&template=3-docs-bug.yaml). Be sure to add or update the **Last reviewed** date after you review the topic. Whether you fix the error or just open an issue, you still reviewed the topic. Update the last reviewed date ----------------------------- After you review a topic, whether you change it or not, update the topic's **Last reviewed** date. The **Last reviewed** text at the bottom of the topic is created by the `@reviewed` tag followed by the date you reviewed the topic. This is an example of an `@reviewed` tag at the end of the topic's source code as it appears in a code editor. ``` @reviewed 2022-09-08 ``` The date is formatted as `YYYY-MM-DD` where: * `YYYY` is the current year * `MM` is the two-digit number of the current month with a leading zero if the month is 01 (January) through 09 (September) * `DD` is the two-digit number of the current day of the month with a leading zero if the day is 01-09. For example: | Review date | `@reviewed` tag | Resulting text displayed in the docs | | --- | --- | --- | | January 12, 2023 | `@reviewed 2023-01-12` | *Last reviewed on Thu Jan 12, 2023* | | November 3, 2022 | `@reviewed 2022-11-03` | *Last reviewed on Fri Nov 03, 2022* | Reviewing and updating a topic ------------------------------ These are the actions you can take after you review a topic. ### The topic is accurate and has no errors If the topic is accurate and has no errors, [make a minor change](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic) to [update the **Last reviewed** date](reviewing-content#update-the-last-reviewed-date) at the bottom of the page. You can use the GitHub user interface to edit the topic's source code. ### The topic requires minor changes If the topic has minor errors, you can fix them when you [make a minor change](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic). Remember to [update the **Last reviewed** date](reviewing-content#update-the-last-reviewed-date) at the bottom of the page when you fix the error. For a minor change, you can use the GitHub user interface in a browser to edit the topic's source code. ### The topic requires major changes If the topic requires major changes, you can [make a major change](contributors-guide-overview#make-a-major-change), or [open a docs issue in GitHub](https://github.com/angular/angular/issues/new?assignees=&labels=&template=3-docs-bug.yaml). You shouldn't make major changes in the GitHub user interface because it doesn't allow you to test them before you submit them. Whether you make the changes the topic needs or open a docs issue, you should still [update the **Last reviewed** date](reviewing-content#update-the-last-reviewed-date). You can use the GitHub user interface in the browser if you only want to update the **Last reviewed** date. Last reviewed on Sun Dec 11 2022 angular Angular developer guides Angular developer guides ======================== As an application framework, Angular includes a collection of well-integrated libraries that cover a wide variety of features. The Angular libraries include routing, forms management, client-server communication, and more. 
This topic lists the various developer guides for you to learn more about these Angular features and to help you determine the correct use of each in your application. Prerequisites ------------- To get the most out of these developer guides, you should review the following topics: * [What is Angular](what-is-angular "What is Angular\? | Angular") * [Getting started tutorial](start "Getting started with Angular | Angular") * [Understanding Angular](understanding-angular-overview "Understanding Angular | Angular") Learn about Angular's features ------------------------------ [Routing and Navigation Learn how to use the Angular router to handle page navigation and other tasks. Router](routing-overview "Routing and navigation developer guide") [Forms Learn about the two approaches to forms in Angular: template-driven and reactive. Forms](forms-overview "Angular forms developer guide") [HTTP Learn how to connect to a server using the HTTP client service in Angular. HTTP client](http "Angular HTTP client developer guide") [Testing Learn about tips and techniques for testing Angular applications. Testing](testing "Angular testing developer guide") [Internationalization Learn how to localize your Angular application. i18n and $localize](i18n-overview "Angular internationalization developer guide") [Animations Learn about how to add an animation to your Angular application. Animations](animations "Angular animations developer guide") [Service Workers and PWA Learn about how to use a service worker to create a progressive web application. Service workers and PWA](service-worker-intro "Angular service worker developer guide") [Web Workers Learn more about how to use a web worker to run a CPU-intensive computation in a background thread. Web Workers](web-worker "Web Workers") [Server-side rendering Learn more about how to use Angular Universal to create a static application page. Server-side rendering](universal "Server-side rendering") [Pre-rendering Learn about how to use pre-rendering to process a dynamic page at build time. Pre-rendering](prerendering "Pre-rendering") Last reviewed on Fri Nov 05 2021 angular Format data based on locale Format data based on locale =========================== Angular provides the following built-in data transformation [pipes](glossary#pipe "pipe - Glossary | Angular"). The data transformation pipes use the [`LOCALE_ID`](../api/core/locale_id "LOCALE_ID | Core - API | Angular") token to format data based on rules of each locale. | Data transformation pipe | Details | | --- | --- | | [`DatePipe`](../api/common/datepipe "DatePipe | Common - API | Angular") | Formats a date value. | | [`CurrencyPipe`](../api/common/currencypipe "CurrencyPipe | Common - API | Angular") | Transforms a number into a currency string. | | [`DecimalPipe`](../api/common/decimalpipe "DecimalPipe | Common - API | Angular") | Transforms a number into a decimal number string. | | [`PercentPipe`](../api/common/percentpipe "PercentPipe | Common - API | Angular") | Transforms a number into a percentage string. | Use DatePipe to display the current date ---------------------------------------- To display the current date in the format for the current locale, use the following format for the `[DatePipe](../api/common/datepipe)`. ``` {{ today | date }} ``` Override current locale for CurrencyPipe ---------------------------------------- Add the `locale` parameter to the pipe to override the current value of `[LOCALE\_ID](../api/core/locale_id)` token. 
To force the currency to use American English (`en-US`), use the following format for the `[CurrencyPipe](../api/common/currencypipe)`. ``` {{ amount | currency : 'en-US' }} ``` > **NOTE**: The locale specified for the `[CurrencyPipe](../api/common/currencypipe)` overrides the global `[LOCALE\_ID](../api/core/locale_id)` token of your application. > > What's next ----------- * [Prepare component for translation](i18n-common-prepare "Prepare component for translation | Angular") Last reviewed on Mon Feb 28 2022 angular How event binding works How event binding works ======================= In an event binding, Angular configures an event handler for the target event. You can use event binding with your own custom events. When the component or directive raises the event, the handler executes the template statement. The template statement performs an action in response to the event. Handling events --------------- A common way to handle events is to pass the event object, `$event`, to the method handling the event. The `$event` object often contains information the method needs, such as a user's name or an image URL. The target event determines the shape of the `$event` object. If the target event is a native DOM element event, then `$event` is a [DOM event object](https://developer.mozilla.org/docs/Web/Events), with properties such as `target` and `target.value`. In the following example, the code sets the `<input>` `value` property by binding to the `name` property. ``` <input [value]="currentItem.name" (input)="currentItem.name=getValue($event)"> ``` With this example, the following actions occur: 1. The code binds to the `input` event of the `<input>` element, which allows the code to listen for changes. 2. When the user makes changes, the `<input>` element raises the `input` event. 3. The binding executes the statement within a context that includes the DOM event object, `$event`. 4. Angular retrieves the changed text by calling `getValue($event)` and updates the `name` property. If the event belongs to a directive or component, `$event` has the shape that the directive or component produces. > The type of `$event.target` is only `EventTarget` in the template. In the `getValue()` method, the target is cast to an `HTMLInputElement` to allow type-safe access to its `value` property. > > > ``` > getValue(event: Event): string { > return (event.target as HTMLInputElement).value; > } > ``` > Last reviewed on Mon Feb 28 2022 angular App shell App shell ========= Application shell is a way to render a portion of your application using a route at build time. It can improve the user experience by quickly launching a static rendered page (a skeleton common to all pages) while the browser downloads the full client version and switches to it automatically after the code loads. This gives users a meaningful first paint of your application that appears quickly because the browser can render the HTML and CSS without the need to initialize any JavaScript. Learn more in [The App Shell Model](https://developers.google.com/web/fundamentals/architecture/app-shell). Step 1: Prepare the application ------------------------------- Do this with the following Angular CLI command: ``` ng new my-app --routing ``` For an existing application, you have to manually add the `[RouterModule](../api/router/routermodule)` and define a `<[router-outlet](../api/router/routeroutlet)>` within your application.
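If you are retrofitting an existing application, that manual setup might look roughly like the following sketch. `AppModule` and `AppComponent` are placeholders for your own root module and component, and the empty route array is only a starting point.

```ts
// app.module.ts — a minimal sketch; adjust names to match your application.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouterModule } from '@angular/router';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    // An empty route table is enough to enable the app shell route later;
    // add your application's own routes here.
    RouterModule.forRoot([]),
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```

You would also add `<router-outlet></router-outlet>` somewhere in the root component's template so that routed content, including the shell route, has a place to render.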
Step 2: Create the application shell ------------------------------------ Use the Angular CLI to automatically create the application shell. ``` ng generate app-shell ``` For more information about this command, see [App shell command](cli/generate#app-shell-command). After running this command, you can see that the `angular.json` configuration file has been updated to add two new targets, with a few other changes. ``` "server": { "builder": "@angular-devkit/build-angular:server", "defaultConfiguration": "production", "options": { "outputPath": "dist/my-app/server", "main": "src/main.server.ts", "tsConfig": "tsconfig.server.json" }, "configurations": { "development": { "outputHashing": "none" }, "production": { "outputHashing": "media", "fileReplacements": [ { "replace": "src/environments/environment.ts", "with": "src/environments/environment.prod.ts" } ], "sourceMap": false, "optimization": true } } }, "app-shell": { "builder": "@angular-devkit/build-angular:app-shell", "defaultConfiguration": "production", "options": { "route": "shell" }, "configurations": { "development": { "browserTarget": "my-app:build:development", "serverTarget": "my-app:server:development" }, "production": { "browserTarget": "my-app:build:production", "serverTarget": "my-app:server:production" } } } ``` Step 3: Verify the application is built with the shell content -------------------------------------------------------------- Use the Angular CLI to build the `app-shell` target. ``` ng run my-app:app-shell:development ``` Or, to use the production configuration: ``` ng run my-app:app-shell:production ``` To verify the build output, open `dist/my-app/browser/index.html`. Look for the default text `app-shell works!` to confirm that the application shell route was rendered as part of the output. Last reviewed on Mon Feb 28 2022 angular Testing Utility APIs Testing Utility APIs ==================== This page describes the most useful Angular testing features. The Angular testing utilities include the `[TestBed](../api/core/testing/testbed)`, the `[ComponentFixture](../api/core/testing/componentfixture)`, and a handful of functions that control the test environment. The [`TestBed`](testing-utility-apis#testbed-api-summary) and [`ComponentFixture`](testing-utility-apis#component-fixture-api-summary) classes are covered separately. Here's a summary of the stand-alone functions, in order of likely utility: | Function | Details | | --- | --- | | `[waitForAsync](../api/core/testing/waitforasync)` | Runs the body of a test (`it`) or setup (`beforeEach`) function within a special *async test zone*. See [waitForAsync](testing-components-scenarios#waitForAsync). | | `[fakeAsync](../api/core/testing/fakeasync)` | Runs the body of a test (`it`) within a special *fakeAsync test zone*, enabling a linear control flow coding style. See [fakeAsync](testing-components-scenarios#fake-async). | | `[tick](../api/core/testing/tick)` | Simulates the passage of time and the completion of pending asynchronous activities by flushing both *timer* and *micro-task* queues within the *fakeAsync test zone*. The curious, dedicated reader might enjoy this lengthy blog post, ["*Tasks, microtasks, queues and schedules*"](https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules). Accepts an optional argument that moves the virtual clock forward by the specified number of milliseconds, clearing asynchronous activities scheduled within that timeframe. See [tick](testing-components-scenarios#tick).
| | `inject` | Injects one or more services from the current `[TestBed](../api/core/testing/testbed)` injector into a test function. It cannot inject a service provided by the component itself. See discussion of the [debugElement.injector](testing-components-scenarios#get-injected-services). | | `[discardPeriodicTasks](../api/core/testing/discardperiodictasks)` | When a `[fakeAsync](../api/core/testing/fakeasync)()` test ends with pending timer event *tasks* (queued `setTimeOut` and `setInterval` callbacks), the test fails with a clear error message. In general, a test should end with no queued tasks. When pending timer tasks are expected, call `[discardPeriodicTasks](../api/core/testing/discardperiodictasks)` to flush the *task* queue and avoid the error. | | `[flushMicrotasks](../api/core/testing/flushmicrotasks)` | When a `[fakeAsync](../api/core/testing/fakeasync)()` test ends with pending *micro-tasks* such as unresolved promises, the test fails with a clear error message. In general, a test should wait for micro-tasks to finish. When pending microtasks are expected, call `[flushMicrotasks](../api/core/testing/flushmicrotasks)` to flush the *micro-task* queue and avoid the error. | | `[ComponentFixtureAutoDetect](../api/core/testing/componentfixtureautodetect)` | A provider token for a service that turns on [automatic change detection](testing-components-scenarios#automatic-change-detection). | | `[getTestBed](../api/core/testing/gettestbed)` | Gets the current instance of the `[TestBed](../api/core/testing/testbed)`. Usually unnecessary because the static class methods of the `[TestBed](../api/core/testing/testbed)` class are typically sufficient. The `[TestBed](../api/core/testing/testbed)` instance exposes a few rarely used members that are not available as static methods. | `[TestBed](../api/core/testing/testbed)` class summary ------------------------------------------------------- The `[TestBed](../api/core/testing/testbed)` class is one of the principal Angular testing utilities. Its API is quite large and can be overwhelming until you've explored it, a little at a time. Read the early part of this guide first to get the basics before trying to absorb the full API. The module definition passed to `configureTestingModule` is a subset of the `@[NgModule](../api/core/ngmodule)` metadata properties. ``` type TestModuleMetadata = { providers?: any[]; declarations?: any[]; imports?: any[]; schemas?: Array<SchemaMetadata | any[]>; }; ``` Each override method takes a `[MetadataOverride](../api/core/testing/metadataoverride)<T>` where `T` is the kind of metadata appropriate to the method, that is, the parameter of an `@[NgModule](../api/core/ngmodule)`, `@[Component](../api/core/component)`, `@[Directive](../api/core/directive)`, or `@[Pipe](../api/core/pipe)`. ``` type MetadataOverride<T> = { add?: Partial<T>; remove?: Partial<T>; set?: Partial<T>; }; ``` The `[TestBed](../api/core/testing/testbed)` API consists of static class methods that either update or reference a *global* instance of the `[TestBed](../api/core/testing/testbed)`. Internally, all static methods cover methods of the current runtime `[TestBed](../api/core/testing/testbed)` instance, which is also returned by the `[getTestBed](../api/core/testing/gettestbed)()` function. Call `[TestBed](../api/core/testing/testbed)` methods *within* a `beforeEach()` to ensure a fresh start before each individual test. Here are the most important static methods, in order of likely utility. 
| Methods | Details | | --- | --- | | `configureTestingModule` | The testing shims (`karma-test-shim`, `browser-test-shim`) establish the [initial test environment](testing) and a default testing module. The default testing module is configured with basic declaratives and some Angular service substitutes that every tester needs. Call `configureTestingModule` to refine the testing module configuration for a particular set of tests by adding and removing imports, declarations (of components, directives, and pipes), and providers. | | `compileComponents` | Compile the testing module asynchronously after you've finished configuring it. You **must** call this method if *any* of the testing module components have a `templateUrl` or `styleUrls` because fetching component template and style files is necessarily asynchronous. See [compileComponents](testing-components-scenarios#compile-components). After calling `compileComponents`, the `[TestBed](../api/core/testing/testbed)` configuration is frozen for the duration of the current spec. | | `[createComponent](../api/core/createcomponent)<T>` | Create an instance of a component of type `T` based on the current `[TestBed](../api/core/testing/testbed)` configuration. After calling `[createComponent](../api/core/createcomponent)`, the `[TestBed](../api/core/testing/testbed)` configuration is frozen for the duration of the current spec. | | `overrideModule` | Replace metadata for the given `[NgModule](../api/core/ngmodule)`. Recall that modules can import other modules. The `overrideModule` method can reach deeply into the current testing module to modify one of these inner modules. | | `overrideComponent` | Replace metadata for the given component class, which could be nested deeply within an inner module. | | `overrideDirective` | Replace metadata for the given directive class, which could be nested deeply within an inner module. | | `overridePipe` | Replace metadata for the given pipe class, which could be nested deeply within an inner module. | | `inject` | Retrieve a service from the current `[TestBed](../api/core/testing/testbed)` injector. The `inject` function is often adequate for this purpose. But `inject` throws an error if it can't provide the service. What if the service is optional? The `TestBed.inject()` method takes an optional second parameter, the object to return if Angular can't find the provider (`null` in this example): ``` expect(TestBed.inject(NotProvided, null)).toBeNull(); ``` After calling `TestBed.inject`, the `[TestBed](../api/core/testing/testbed)` configuration is frozen for the duration of the current spec. | | `initTestEnvironment` | Initialize the testing environment for the entire test run. The testing shims (`karma-test-shim`, `browser-test-shim`) call it for you so there is rarely a reason for you to call it yourself. Call this method *exactly once*. To change this default in the middle of a test run, call `resetTestEnvironment` first. Specify the Angular compiler factory, a `[PlatformRef](../api/core/platformref)`, and a default Angular testing module. Alternatives for non-browser platforms are available in the general form `@angular/platform-<platform_name>/testing/<platform_name>`. | | `resetTestEnvironment` | Reset the initial test environment, including the default testing module. | A few of the `[TestBed](../api/core/testing/testbed)` instance methods are not covered by static `[TestBed](../api/core/testing/testbed)` *class* methods. These are rarely needed. 
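To see how these static methods fit together, here is a minimal spec sketch. `BannerComponent` and `QuoteService` are hypothetical stand-ins for your own component and service, not part of this guide's sample application.

```ts
import { ComponentFixture, TestBed, waitForAsync } from '@angular/core/testing';

// Hypothetical imports: substitute a component and service from your own application.
import { BannerComponent } from './banner.component';
import { QuoteService } from './quote.service';

describe('BannerComponent (TestBed sketch)', () => {
  let fixture: ComponentFixture<BannerComponent>;

  beforeEach(waitForAsync(() => {
    // Refine the default testing module for this set of tests.
    TestBed.configureTestingModule({
      declarations: [BannerComponent],
      providers: [QuoteService],
    }).compileComponents(); // required when the component uses templateUrl/styleUrls
  }));

  beforeEach(() => {
    // Freezes the TestBed configuration for the rest of this spec.
    fixture = TestBed.createComponent(BannerComponent);
    fixture.detectChanges();
  });

  it('gets services from the TestBed injector', () => {
    const quoteService = TestBed.inject(QuoteService);
    expect(quoteService).toBeTruthy();
  });
});
```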
The `[ComponentFixture](../api/core/testing/componentfixture)` -------------------------------------------------------------- The `TestBed.createComponent<T>` creates an instance of the component `T` and returns a strongly typed `[ComponentFixture](../api/core/testing/componentfixture)` for that component. The `[ComponentFixture](../api/core/testing/componentfixture)` properties and methods provide access to the component, its DOM representation, and aspects of its Angular environment. ### `[ComponentFixture](../api/core/testing/componentfixture)` properties Here are the most important properties for testers, in order of likely utility. | Properties | Details | | --- | --- | | `componentInstance` | The instance of the component class created by `TestBed.createComponent`. | | `debugElement` | The `[DebugElement](../api/core/debugelement)` associated with the root element of the component. The `debugElement` provides insight into the component and its DOM element during test and debugging. It's a critical property for testers. The most interesting members are covered [below](testing-utility-apis#debug-element-details). | | `nativeElement` | The native DOM element at the root of the component. | | `changeDetectorRef` | The `[ChangeDetectorRef](../api/core/changedetectorref)` for the component. The `[ChangeDetectorRef](../api/core/changedetectorref)` is most valuable when testing a component that has the `[ChangeDetectionStrategy.OnPush](../api/core/changedetectionstrategy#OnPush)` method or the component's change detection is under your programmatic control. | ### `[ComponentFixture](../api/core/testing/componentfixture)` methods The *fixture* methods cause Angular to perform certain tasks on the component tree. Call these method to trigger Angular behavior in response to simulated user action. Here are the most useful methods for testers. | Methods | Details | | --- | --- | | `detectChanges` | Trigger a change detection cycle for the component. Call it to initialize the component (it calls `ngOnInit`) and after your test code, change the component's data bound property values. Angular can't see that you've changed `personComponent.name` and won't update the `name` binding until you call `detectChanges`. Runs `checkNoChanges` afterwards to confirm that there are no circular updates unless called as `detectChanges(false)`; | | `autoDetectChanges` | Set this to `true` when you want the fixture to detect changes automatically. When autodetect is `true`, the test fixture calls `detectChanges` immediately after creating the component. Then it listens for pertinent zone events and calls `detectChanges` accordingly. When your test code modifies component property values directly, you probably still have to call `fixture.detectChanges` to trigger data binding updates. The default is `false`. Testers who prefer fine control over test behavior tend to keep it `false`. | | `checkNoChanges` | Do a change detection run to make sure there are no pending changes. Throws an exceptions if there are. | | `isStable` | If the fixture is currently *stable*, returns `true`. If there are async tasks that have not completed, returns `false`. | | `whenStable` | Returns a promise that resolves when the fixture is stable. To resume testing after completion of asynchronous activity or asynchronous change detection, hook that promise. See [whenStable](testing-components-scenarios#when-stable). | | `destroy` | Trigger component destruction. 
| #### `[DebugElement](../api/core/debugelement)` The `[DebugElement](../api/core/debugelement)` provides crucial insights into the component's DOM representation. From the test root component's `[DebugElement](../api/core/debugelement)` returned by `fixture.debugElement`, you can walk (and query) the fixture's entire element and component subtrees. Here are the most useful `[DebugElement](../api/core/debugelement)` members for testers, in approximate order of utility: | Members | Details | | --- | --- | | `nativeElement` | The corresponding DOM element in the browser (null for WebWorkers). | | `[query](../api/animations/query)` | Calling `[query](../api/animations/query)(predicate: [Predicate](../api/core/predicate)<[DebugElement](../api/core/debugelement)>)` returns the first `[DebugElement](../api/core/debugelement)` that matches the [predicate](testing-utility-apis#query-predicate) at any depth in the subtree. | | `queryAll` | Calling `queryAll(predicate: [Predicate](../api/core/predicate)<[DebugElement](../api/core/debugelement)>)` returns all `DebugElements` that matches the [predicate](testing-utility-apis#query-predicate) at any depth in subtree. | | `injector` | The host dependency injector. For example, the root element's component instance injector. | | `componentInstance` | The element's own component instance, if it has one. | | `context` | An object that provides parent context for this element. Often an ancestor component instance that governs this element. When an element is repeated within `*[ngFor](../api/common/ngfor)`, the context is an `[NgForOf](../api/common/ngforof)` whose `$implicit` property is the value of the row instance value. For example, the `hero` in `*[ngFor](../api/common/ngfor)="let hero of heroes"`. | | `children` | The immediate `[DebugElement](../api/core/debugelement)` children. Walk the tree by descending through `children`. `[DebugElement](../api/core/debugelement)` also has `childNodes`, a list of `[DebugNode](../api/core/debugnode)` objects. `[DebugElement](../api/core/debugelement)` derives from `[DebugNode](../api/core/debugnode)` objects and there are often more nodes than elements. Testers can usually ignore plain nodes. | | `parent` | The `[DebugElement](../api/core/debugelement)` parent. Null if this is the root element. | | `name` | The element tag name, if it is an element. | | `triggerEventHandler` | Triggers the event by its name if there is a corresponding listener in the element's `listeners` collection. The second parameter is the *event object* expected by the handler. See [triggerEventHandler](testing-components-scenarios#trigger-event-handler). If the event lacks a listener or there's some other problem, consider calling `nativeElement.dispatchEvent(eventObject)`. | | `listeners` | The callbacks attached to the component's `@[Output](../api/core/output)` properties and/or the element's event properties. | | `providerTokens` | This component's injector lookup tokens. Includes the component itself plus the tokens that the component lists in its `providers` metadata. | | `source` | Where to find this element in the source component template. | | `references` | Dictionary of objects associated with template local variables (for example, `#foo`), keyed by the local variable name. | The `DebugElement.query(predicate)` and `DebugElement.queryAll(predicate)` methods take a predicate that filters the source element's subtree for matching `[DebugElement](../api/core/debugelement)`. 
The predicate is any method that takes a `[DebugElement](../api/core/debugelement)` and returns a *truthy* value. The following example finds all `DebugElements` with a reference to a template local variable named "content": ``` // Filter for DebugElements with a #content reference const contentRefs = el.queryAll( de => de.references['content']); ``` The Angular `[By](../api/platform-browser/by)` class has three static methods for common predicates: | Static method | Details | | --- | --- | | `By.all` | Return all elements | | `By.css(selector)` | Return elements with matching CSS selectors | | `By.directive(directive)` | Return elements that Angular matched to an instance of the directive class | ``` // Can find DebugElement either by css selector or by directive const h2 = fixture.debugElement.query(By.css('h2')); const directive = fixture.debugElement.query(By.directive(HighlightDirective)); ``` Last reviewed on Mon Feb 28 2022
angular Service worker communication Service worker communication ============================ Importing `[ServiceWorkerModule](../api/service-worker/serviceworkermodule)` into your `AppModule` doesn't just register the service worker, it also provides a few services you can use to interact with the service worker and control the caching of your application. Prerequisites ------------- A basic understanding of the following: * [Getting Started with Service Workers](service-worker-getting-started) `[SwUpdate](../api/service-worker/swupdate)` service ----------------------------------------------------- The `[SwUpdate](../api/service-worker/swupdate)` service gives you access to events that indicate when the service worker discovers and installs an available update for your application. The `[SwUpdate](../api/service-worker/swupdate)` service supports three separate operations: * Get notified when an updated version is *detected* on the server, *installed and ready* to be used locally or when an *installation fails* * Ask the service worker to check the server for new updates * Ask the service worker to activate the latest version of the application for the current tab ### Version updates The `versionUpdates` is an `Observable` property of `[SwUpdate](../api/service-worker/swupdate)` and emits four event types: | Event types | Details | | --- | --- | | `[VersionDetectedEvent](../api/service-worker/versiondetectedevent)` | Emitted when the service worker has detected a new version of the app on the server and is about to start downloading it. | | `[NoNewVersionDetectedEvent](../api/service-worker/nonewversiondetectedevent)` | Emitted when the service worker has checked the version of the app on the server and did not find a new version. | | `[VersionReadyEvent](../api/service-worker/versionreadyevent)` | Emitted when a new version of the app is available to be activated by clients. It may be used to notify the user of an available update or prompt them to refresh the page. | | `[VersionInstallationFailedEvent](../api/service-worker/versioninstallationfailedevent)` | Emitted when the installation of a new version failed. It may be used for logging/monitoring purposes. | ``` @Injectable() export class LogUpdateService { constructor(updates: SwUpdate) { updates.versionUpdates.subscribe(evt => { switch (evt.type) { case 'VERSION_DETECTED': console.log(`Downloading new app version: ${evt.version.hash}`); break; case 'VERSION_READY': console.log(`Current app version: ${evt.currentVersion.hash}`); console.log(`New app version ready for use: ${evt.latestVersion.hash}`); break; case 'VERSION_INSTALLATION_FAILED': console.log(`Failed to install app version '${evt.version.hash}': ${evt.error}`); break; } }); } } ``` ### Checking for updates It's possible to ask the service worker to check if any updates have been deployed to the server. The service worker checks for updates during initialization and on each navigation request —that is, when the user navigates from a different address to your application. However, you might choose to manually check for updates if you have a site that changes frequently or want updates to happen on a schedule. 
Do this with the `checkForUpdate()` method: ``` import { ApplicationRef, Injectable } from '@angular/core'; import { SwUpdate } from '@angular/service-worker'; import { concat, interval } from 'rxjs'; import { first } from 'rxjs/operators'; @Injectable() export class CheckForUpdateService { constructor(appRef: ApplicationRef, updates: SwUpdate) { // Allow the app to stabilize first, before starting // polling for updates with `interval()`. const appIsStable$ = appRef.isStable.pipe(first(isStable => isStable === true)); const everySixHours$ = interval(6 * 60 * 60 * 1000); const everySixHoursOnceAppIsStable$ = concat(appIsStable$, everySixHours$); everySixHoursOnceAppIsStable$.subscribe(async () => { try { const updateFound = await updates.checkForUpdate(); console.log(updateFound ? 'A new version is available.' : 'Already on the latest version.'); } catch (err) { console.error('Failed to check for updates:', err); } }); } } ``` This method returns a `Promise<boolean>` which indicates if an update is available for activation. The check might fail, which will cause a rejection of the `Promise`. > In order to avoid negatively affecting the initial rendering of the page, `[ServiceWorkerModule](../api/service-worker/serviceworkermodule)` waits for up to 30 seconds by default for the application to stabilize, before registering the ServiceWorker script. Constantly polling for updates, for example, with [setInterval()](https://developer.mozilla.org/docs/Web/API/WindowOrWorkerGlobalScope/setInterval) or RxJS' [interval()](https://rxjs.dev/api/index/function/interval), prevents the application from stabilizing and the ServiceWorker script is not registered with the browser until the 30 seconds upper limit is reached. > > > > **NOTE**: This is true for any kind of polling done by your application. Check the [isStable](../api/core/applicationref#isStable) documentation for more information. > > > > > > Avoid that delay by waiting for the application to stabilize first, before starting to poll for updates, as shown in the preceding example. Alternatively, you might want to define a different [registration strategy](../api/service-worker/swregistrationoptions#registrationStrategy) for the ServiceWorker. > > ### Updating to the latest version You can update an existing tab to the latest version by reloading the page as soon as a new version is ready. To avoid disrupting the user's progress, it is generally a good idea to prompt the user and let them confirm that it is OK to reload the page and update to the latest version: ``` @Injectable() export class PromptUpdateService { constructor(swUpdate: SwUpdate) { swUpdate.versionUpdates .pipe(filter((evt): evt is VersionReadyEvent => evt.type === 'VERSION_READY')) .subscribe(evt => { if (promptUser(evt)) { // Reload the page to update to the latest version. document.location.reload(); } }); } } ``` > Calling [SwUpdate#activateUpdate()](../api/service-worker/swupdate#activateUpdate) updates a tab to the latest version without reloading the page, but this could break the application. > > Updating without reloading can create a version mismatch between the [application shell](glossary#app-shell) and other page resources, such as [lazy-loaded chunks](glossary#lazy-loading), whose filenames may change between versions. > > You should only use `activateUpdate()`, if you are certain it is safe for your specific use case. 
> > ### Handling an unrecoverable state In some cases, the version of the application used by the service worker to serve a client might be in a broken state that cannot be recovered from without a full page reload. For example, imagine the following scenario: * A user opens the application for the first time and the service worker caches the latest version of the application. Assume the application's cached assets include `index.html`, `main.<main-hash-1>.js` and `lazy-chunk.<lazy-hash-1>.js`. * The user closes the application and does not open it for a while. * After some time, a new version of the application is deployed to the server. This newer version includes the files `index.html`, `main.<main-hash-2>.js` and `lazy-chunk.<lazy-hash-2>.js`. > **NOTE**: The hashes are different now, because the content of the files changed. > > The old version is no longer available on the server. * In the meantime, the user's browser decides to evict `lazy-chunk.<lazy-hash-1>.js` from its cache. Browsers might decide to evict specific (or all) resources from a cache in order to reclaim disk space. * The user opens the application again. The service worker serves the latest version known to it at this point, namely the old version (`index.html` and `main.<main-hash-1>.js`). * At some later point, the application requests the lazy bundle, `lazy-chunk.<lazy-hash-1>.js`. * The service worker is unable to find the asset in the cache (remember that the browser evicted it). Nor is it able to retrieve it from the server (because the server now only has `lazy-chunk.<lazy-hash-2>.js` from the newer version). In the preceding scenario, the service worker is not able to serve an asset that would normally be cached. That particular application version is broken and there is no way to fix the state of the client without reloading the page. In such cases, the service worker notifies the client by sending an `[UnrecoverableStateEvent](../api/service-worker/unrecoverablestateevent)` event. Subscribe to `[SwUpdate](../api/service-worker/swupdate)#unrecoverable` to be notified and handle these errors. ``` @Injectable() export class HandleUnrecoverableStateService { constructor(updates: SwUpdate) { updates.unrecoverable.subscribe(event => { notifyUser( 'An error occurred that we cannot recover from:\n' + event.reason + '\n\nPlease reload the page.' ); }); } } ``` More on Angular service workers ------------------------------- You might also be interested in the following: * [Service Worker Notifications](service-worker-notifications) Last reviewed on Mon Feb 28 2022 angular Validating form input Validating form input ===================== You can improve overall data quality by validating user input for accuracy and completeness. This page shows how to validate user input from the UI and display useful validation messages, in both reactive and template-driven forms. Prerequisites ------------- Before reading about form validation, you should have a basic understanding of the following. 
* [TypeScript](https://www.typescriptlang.org/ "The TypeScript language") and HTML5 programming * Fundamental concepts of [Angular application design](architecture "Introduction to Angular application-design concepts") * The [two types of forms that Angular supports](forms-overview "Introduction to Angular forms") * Basics of either [Template-driven Forms](forms "Template-driven forms guide") or [Reactive Forms](reactive-forms "Reactive forms guide") > Get the complete example code for the reactive and template-driven forms used here to illustrate form validation. Run the live example. > > Validating input in template-driven forms ----------------------------------------- To add validation to a template-driven form, you add the same validation attributes as you would with [native HTML form validation](https://developer.mozilla.org/docs/Web/Guide/HTML/HTML5/Constraint_validation). Angular uses directives to match these attributes with validator functions in the framework. Every time the value of a form control changes, Angular runs validation and generates either a list of validation errors that results in an `INVALID` status, or null, which results in a VALID status. You can then inspect the control's state by exporting `[ngModel](../api/forms/ngmodel)` to a local template variable. The following example exports `[NgModel](../api/forms/ngmodel)` into a variable called `name`: ``` <input type="text" id="name" name="name" class="form-control" required minlength="4" appForbiddenName="bob" [(ngModel)]="hero.name" #name="ngModel"> <div *ngIf="name.invalid && (name.dirty || name.touched)" class="alert"> <div *ngIf="name.errors?.['required']"> Name is required. </div> <div *ngIf="name.errors?.['minlength']"> Name must be at least 4 characters long. </div> <div *ngIf="name.errors?.['forbiddenName']"> Name cannot be Bob. </div> </div> ``` Notice the following features illustrated by the example. * The `<input>` element carries the HTML validation attributes: `required` and `[minlength](../api/forms/minlengthvalidator)`. It also carries a custom validator directive, `forbiddenName`. For more information, see the [Custom validators](form-validation#custom-validators) section. * `#name="[ngModel](../api/forms/ngmodel)"` exports `[NgModel](../api/forms/ngmodel)` into a local variable called `name`. `[NgModel](../api/forms/ngmodel)` mirrors many of the properties of its underlying `[FormControl](../api/forms/formcontrol)` instance, so you can use this in the template to check for control states such as `valid` and `dirty`. For a full list of control properties, see the [AbstractControl](../api/forms/abstractcontrol) API reference. + The `*[ngIf](../api/common/ngif)` on the `<div>` element reveals a set of nested message `divs` but only if the `name` is invalid and the control is either `dirty` or `touched`. + Each nested `<div>` can present a custom message for one of the possible validation errors. There are messages for `required`, `[minlength](../api/forms/minlengthvalidator)`, and `forbiddenName`. > To prevent the validator from displaying errors before the user has a chance to edit the form, you should check for either the `dirty` or `touched` states in a control. > > * When the user changes the value in the watched field, the control is marked as "dirty" > * When the user blurs the form control element, the control is marked as "touched" > > Validating input in reactive forms ---------------------------------- In a reactive form, the source of truth is the component class. 
Instead of adding validators through attributes in the template, you add validator functions directly to the form control model in the component class. Angular then calls these functions whenever the value of the control changes. ### Validator functions Validator functions can be either synchronous or asynchronous. | Validator type | Details | | --- | --- | | Sync validators | Synchronous functions that take a control instance and immediately return either a set of validation errors or `null`. Pass these in as the second argument when you instantiate a `[FormControl](../api/forms/formcontrol)`. | | Async validators | Asynchronous functions that take a control instance and return a Promise or Observable that later emits a set of validation errors or `null`. Pass these in as the third argument when you instantiate a `[FormControl](../api/forms/formcontrol)`. | For performance reasons, Angular only runs async validators if all sync validators pass. Each must complete before errors are set. ### Built-in validator functions You can choose to [write your own validator functions](form-validation#custom-validators), or you can use some of Angular's built-in validators. The same built-in validators that are available as attributes in template-driven forms, such as `required` and `[minlength](../api/forms/minlengthvalidator)`, are all available to use as functions from the `[Validators](../api/forms/validators)` class. For a full list of built-in validators, see the [Validators](../api/forms/validators) API reference. To update the hero form to be a reactive form, use some of the same built-in validators —this time, in function form, as in the following example. ``` ngOnInit(): void { this.heroForm = new FormGroup({ name: new FormControl(this.hero.name, [ Validators.required, Validators.minLength(4), forbiddenNameValidator(/bob/i) // <-- Here's how you pass in the custom validator. ]), alterEgo: new FormControl(this.hero.alterEgo), power: new FormControl(this.hero.power, Validators.required) }); } get name() { return this.heroForm.get('name'); } get power() { return this.heroForm.get('power'); } ``` In this example, the `name` control sets up two built-in validators —`Validators.required` and `Validators.minLength(4)`— and one custom validator, `forbiddenNameValidator`. (For more details see [custom validators](form-validation#custom-validators).) All of these validators are synchronous, so they are passed as the second argument. Notice that you can support multiple validators by passing the functions in as an array. This example also adds a few getter methods. In a reactive form, you can always access any form control through the `get` method on its parent group, but sometimes it's useful to define getters as shorthand for the template. If you look at the template for the `name` input again, it is fairly similar to the template-driven example. ``` <input type="text" id="name" class="form-control" formControlName="name" required> <div *ngIf="name.invalid && (name.dirty || name.touched)" class="alert alert-danger"> <div *ngIf="name.errors?.['required']"> Name is required. </div> <div *ngIf="name.errors?.['minlength']"> Name must be at least 4 characters long. </div> <div *ngIf="name.errors?.['forbiddenName']"> Name cannot be Bob. </div> </div> ``` This form differs from the template-driven version in that it no longer exports any directives. Instead, it uses the `name` getter defined in the component class. Notice that the `required` attribute is still present in the template. 
Although it's not necessary for validation, it should be retained for accessibility purposes. Defining custom validators -------------------------- The built-in validators don't always match the exact use case of your application, so you sometimes need to create a custom validator. Consider the `forbiddenNameValidator` function from previous [reactive-form examples](form-validation#reactive-component-class). Here's what the definition of that function looks like. ``` /** A hero's name can't match the given regular expression */ export function forbiddenNameValidator(nameRe: RegExp): ValidatorFn { return (control: AbstractControl): ValidationErrors | null => { const forbidden = nameRe.test(control.value); return forbidden ? {forbiddenName: {value: control.value}} : null; }; } ``` The function is a factory that takes a regular expression to detect a *specific* forbidden name and returns a validator function. In this sample, the forbidden name is "bob", so the validator rejects any hero name containing "bob". Elsewhere it could reject "alice" or any name that the configuring regular expression matches. The `forbiddenNameValidator` factory returns the configured validator function. That function takes an Angular control object and returns *either* null if the control value is valid *or* a validation error object. The validation error object typically has a property whose name is the validation key, `'forbiddenName'`, and whose value is an arbitrary dictionary of values that you could insert into an error message, `{name}`. Custom async validators are similar to sync validators, but they must instead return a Promise or observable that later emits null or a validation error object. In the case of an observable, the observable must complete, at which point the form uses the last value emitted for validation. ### Adding custom validators to reactive forms In reactive forms, add a custom validator by passing the function directly to the `[FormControl](../api/forms/formcontrol)`. ``` this.heroForm = new FormGroup({ name: new FormControl(this.hero.name, [ Validators.required, Validators.minLength(4), forbiddenNameValidator(/bob/i) // <-- Here's how you pass in the custom validator. ]), alterEgo: new FormControl(this.hero.alterEgo), power: new FormControl(this.hero.power, Validators.required) }); ``` ### Adding custom validators to template-driven forms In template-driven forms, add a directive to the template, where the directive wraps the validator function. For example, the corresponding `ForbiddenValidatorDirective` serves as a wrapper around the `forbiddenNameValidator`. Angular recognizes the directive's role in the validation process because the directive registers itself with the `[NG\_VALIDATORS](../api/forms/ng_validators)` provider, as shown in the following example. `[NG\_VALIDATORS](../api/forms/ng_validators)` is a predefined provider with an extensible collection of validators. ``` providers: [{provide: NG_VALIDATORS, useExisting: ForbiddenValidatorDirective, multi: true}] ``` The directive class then implements the `[Validator](../api/forms/validator)` interface, so that it can easily integrate with Angular forms. Here is the rest of the directive to help you get an idea of how it all comes together.
``` @Directive({ selector: '[appForbiddenName]', providers: [{provide: NG_VALIDATORS, useExisting: ForbiddenValidatorDirective, multi: true}] }) export class ForbiddenValidatorDirective implements Validator { @Input('appForbiddenName') forbiddenName = ''; validate(control: AbstractControl): ValidationErrors | null { return this.forbiddenName ? forbiddenNameValidator(new RegExp(this.forbiddenName, 'i'))(control) : null; } } ``` Once the `ForbiddenValidatorDirective` is ready, you can add its selector, `appForbiddenName`, to any input element to activate it. For example: ``` <input type="text" id="name" name="name" class="form-control" required minlength="4" appForbiddenName="bob" [(ngModel)]="hero.name" #name="ngModel"> ``` > Notice that the custom validation directive is instantiated with `useExisting` rather than `useClass`. The registered validator must be *this instance* of the `ForbiddenValidatorDirective` —the instance in the form with its `forbiddenName` property bound to "bob". > > If you were to replace `useExisting` with `useClass`, then you'd be registering a new class instance, one that doesn't have a `forbiddenName`. > > Control status CSS classes -------------------------- Angular automatically mirrors many control properties onto the form control element as CSS classes. Use these classes to style form control elements according to the state of the form. The following classes are currently supported. * `.ng-valid` * `.ng-invalid` * `.ng-pending` * `.ng-pristine` * `.ng-dirty` * `.ng-untouched` * `.ng-touched` * `.ng-submitted` (enclosing form element only) In the following example, the hero form uses the `.ng-valid` and `.ng-invalid` classes to set the color of each form control's border. ``` .ng-valid[required], .ng-valid.required { border-left: 5px solid #42A948; /* green */ } .ng-invalid:not(form) { border-left: 5px solid #a94442; /* red */ } .alert div { background-color: #fed3d3; color: #820000; padding: 1rem; margin-bottom: 1rem; } .form-group { margin-bottom: 1rem; } label { display: block; margin-bottom: .5rem; } select { width: 100%; padding: .5rem; } ``` Cross-field validation ---------------------- A cross-field validator is a [custom validator](form-validation#custom-validators "Read about custom validators") that compares the values of different fields in a form and accepts or rejects them in combination. For example, you might have a form that offers mutually incompatible options, so that the user can choose A or B, but not both. Some field values might also depend on others; a user might be allowed to choose B only if A is also chosen. The following cross validation examples show how to do the following: * Validate reactive or template-based form input based on the values of two sibling controls, * Show a descriptive error message after the user interacted with the form and the validation failed. The examples use cross-validation to ensure that heroes do not reveal their true identities by filling out the Hero Form. The validators do this by checking that the hero names and alter egos do not match. ### Adding cross-validation to reactive forms The form has the following structure: ``` const heroForm = new FormGroup({ 'name': new FormControl(), 'alterEgo': new FormControl(), 'power': new FormControl() }); ``` Notice that the `name` and `alterEgo` are sibling controls. To evaluate both controls in a single custom validator, you must perform the validation in a common ancestor control: the `[FormGroup](../api/forms/formgroup)`.
You query the `[FormGroup](../api/forms/formgroup)` for its child controls so that you can compare their values. To add a validator to the `[FormGroup](../api/forms/formgroup)`, pass the new validator in as the second argument on creation. ``` const heroForm = new FormGroup({ 'name': new FormControl(), 'alterEgo': new FormControl(), 'power': new FormControl() }, { validators: identityRevealedValidator }); ``` The validator code is as follows. ``` /** A hero's name can't match the hero's alter ego */ export const identityRevealedValidator: ValidatorFn = (control: AbstractControl): ValidationErrors | null => { const name = control.get('name'); const alterEgo = control.get('alterEgo'); return name && alterEgo && name.value === alterEgo.value ? { identityRevealed: true } : null; }; ``` The `identity` validator implements the `[ValidatorFn](../api/forms/validatorfn)` interface. It takes an Angular control object as an argument and returns either null if the form is valid, or `[ValidationErrors](../api/forms/validationerrors)` otherwise. The validator retrieves the child controls by calling the `[FormGroup](../api/forms/formgroup)`'s [get](../api/forms/abstractcontrol#get) method, then compares the values of the `name` and `alterEgo` controls. If the values do not match, the hero's identity remains secret, both are valid, and the validator returns null. If they do match, the hero's identity is revealed and the validator must mark the form as invalid by returning an error object. To provide better user experience, the template shows an appropriate error message when the form is invalid. ``` <div *ngIf="heroForm.errors?.['identityRevealed'] && (heroForm.touched || heroForm.dirty)" class="cross-validation-error-message alert alert-danger"> Name cannot match alter ego. </div> ``` This `*[ngIf](../api/common/ngif)` displays the error if the `[FormGroup](../api/forms/formgroup)` has the cross validation error returned by the `identityRevealed` validator, but only if the user finished [interacting with the form](form-validation#dirty-or-touched). ### Adding cross-validation to template-driven forms For a template-driven form, you must create a directive to wrap the validator function. You provide that directive as the validator using the [`NG_VALIDATORS` token](form-validation#adding-to-template-driven-forms "Read about providing validators"), as shown in the following example. ``` @Directive({ selector: '[appIdentityRevealed]', providers: [{ provide: NG_VALIDATORS, useExisting: IdentityRevealedValidatorDirective, multi: true }] }) export class IdentityRevealedValidatorDirective implements Validator { validate(control: AbstractControl): ValidationErrors | null { return identityRevealedValidator(control); } } ``` You must add the new directive to the HTML template. Because the validator must be registered at the highest level in the form, the following template puts the directive on the `form` tag. ``` <form #heroForm="ngForm" appIdentityRevealed> ``` To provide better user experience, an appropriate error message appears when the form is invalid. ``` <div *ngIf="heroForm.errors?.['identityRevealed'] && (heroForm.touched || heroForm.dirty)" class="cross-validation-error-message alert"> Name cannot match alter ego. </div> ``` This is the same in both template-driven and reactive forms. Creating asynchronous validators -------------------------------- Asynchronous validators implement the `[AsyncValidatorFn](../api/forms/asyncvalidatorfn)` and `[AsyncValidator](../api/forms/asyncvalidator)` interfaces. 
These are very similar to their synchronous counterparts, with the following differences. * The `validate()` functions must return a Promise or an observable, * The observable returned must be finite, meaning it must complete at some point. To convert an infinite observable into a finite one, pipe the observable through a filtering operator such as `first`, `last`, `take`, or `takeUntil`. Asynchronous validation happens after the synchronous validation, and is performed only if the synchronous validation is successful. This check lets forms avoid potentially expensive async validation processes (such as an HTTP request) if the more basic validation methods have already found invalid input. After asynchronous validation begins, the form control enters a `pending` state. Inspect the control's `pending` property and use it to give visual feedback about the ongoing validation operation. A common UI pattern is to show a spinner while the async validation is being performed. The following example shows how to achieve this in a template-driven form. ``` <input [(ngModel)]="name" #model="ngModel" appSomeAsyncValidator> <app-spinner *ngIf="model.pending"></app-spinner> ``` ### Implementing a custom async validator In the following example, an async validator ensures that heroes pick an alter ego that is not already taken. New heroes are constantly enlisting and old heroes are leaving the service, so the list of available alter egos cannot be retrieved ahead of time. To validate the potential alter ego entry, the validator must initiate an asynchronous operation to consult a central database of all currently enlisted heroes. The following code creates the validator class, `UniqueAlterEgoValidator`, which implements the `[AsyncValidator](../api/forms/asyncvalidator)` interface. ``` @Injectable({ providedIn: 'root' }) export class UniqueAlterEgoValidator implements AsyncValidator { constructor(private heroesService: HeroesService) {} validate( control: AbstractControl ): Observable<ValidationErrors | null> { return this.heroesService.isAlterEgoTaken(control.value).pipe( map(isTaken => (isTaken ? { uniqueAlterEgo: true } : null)), catchError(() => of(null)) ); } } ``` The constructor injects the `HeroesService`, which defines the following interface. ``` interface HeroesService { isAlterEgoTaken: (alterEgo: string) => Observable<boolean>; } ``` In a real world application, the `HeroesService` would be responsible for making an HTTP request to the hero database to check if the alter ego is available. From the validator's point of view, the actual implementation of the service is not important, so the example can just code against the `HeroesService` interface. As the validation begins, the `UniqueAlterEgoValidator` delegates to the `HeroesService` `isAlterEgoTaken()` method with the current control value. At this point the control is marked as `pending` and remains in this state until the observable chain returned from the `validate()` method completes. The `isAlterEgoTaken()` method dispatches an HTTP request that checks if the alter ego is available, and returns `Observable<boolean>` as the result. The `validate()` method pipes the response through the `map` operator and transforms it into a validation result. The method then, like any validator, returns `null` if the form is valid, and `[ValidationErrors](../api/forms/validationerrors)` if it is not. This validator handles any potential errors with the `catchError` operator. 
In this case, the validator treats the `isAlterEgoTaken()` error as a successful validation, because failure to make a validation request does not necessarily mean that the alter ego is invalid. You could handle the error differently and return the `ValidationError` object instead. After some time passes, the observable chain completes and the asynchronous validation is done. The `pending` flag is set to `false`, and the form validity is updated. ### Adding async validators to reactive forms To use an async validator in reactive forms, begin by injecting the validator into the constructor of the component class. ``` constructor(private alterEgoValidator: UniqueAlterEgoValidator) {} ``` Then, pass the validator function directly to the `[FormControl](../api/forms/formcontrol)` to apply it. In the following example, the `validate` function of `UniqueAlterEgoValidator` is applied to `alterEgoControl` by passing it to the control's `asyncValidators` option and binding it to the instance of `UniqueAlterEgoValidator` that was injected into `HeroFormReactiveComponent`. The value of `asyncValidators` can be either a single async validator function, or an array of functions. To learn more about `[FormControl](../api/forms/formcontrol)` options, see the [AbstractControlOptions](../api/forms/abstractcontroloptions) API reference. ``` const alterEgoControl = new FormControl('', { asyncValidators: [this.alterEgoValidator.validate.bind(this.alterEgoValidator)], updateOn: 'blur' }); ``` ### Adding async validators to template-driven forms To use an async validator in template-driven forms, create a new directive and register the `[NG\_ASYNC\_VALIDATORS](../api/forms/ng_async_validators)` provider on it. In the example below, the directive injects the `UniqueAlterEgoValidator` class that contains the actual validation logic and invokes it in the `validate` function, triggered by Angular when validation should happen. ``` @Directive({ selector: '[appUniqueAlterEgo]', providers: [ { provide: NG_ASYNC_VALIDATORS, useExisting: forwardRef(() => UniqueAlterEgoValidatorDirective), multi: true } ] }) export class UniqueAlterEgoValidatorDirective implements AsyncValidator { constructor(private validator: UniqueAlterEgoValidator) {} validate( control: AbstractControl ): Observable<ValidationErrors | null> { return this.validator.validate(control); } } ``` Then, as with synchronous validators, add the directive's selector to an input to activate it. ``` <input type="text" id="alterEgo" name="alterEgo" #alterEgo="ngModel" [(ngModel)]="hero.alterEgo" [ngModelOptions]="{ updateOn: 'blur' }" appUniqueAlterEgo> ``` ### Optimizing performance of async validators By default, all validators run after every form value change. With synchronous validators, this does not normally have a noticeable impact on application performance. Async validators, however, commonly perform some kind of HTTP request to validate the control. Dispatching an HTTP request after every keystroke could put a strain on the backend API, and should be avoided if possible. You can delay updating the form validity by changing the `updateOn` property from `change` (default) to `submit` or `blur`. With template-driven forms, set the property in the template. ``` <input [(ngModel)]="name" [ngModelOptions]="{updateOn: 'blur'}"> ``` With reactive forms, set the property in the `[FormControl](../api/forms/formcontrol)` instance. 
``` new FormControl('', {updateOn: 'blur'}); ``` Interaction with native HTML form validation -------------------------------------------- By default, Angular disables [native HTML form validation](https://developer.mozilla.org/docs/Web/Guide/HTML/Constraint_validation) by adding the `novalidate` attribute on the enclosing `<form>` and uses directives to match these attributes with validator functions in the framework. If you want to use native validation **in combination** with Angular-based validation, you can re-enable it with the `ngNativeValidate` directive. See the [API docs](../api/forms/ngform#native-dom-validation-ui) for details. Last reviewed on Mon Feb 28 2022
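As a brief addendum to the validation guide above, the following is a minimal sketch, not taken from the guide itself, of a component that opts back into native constraint validation with `ngNativeValidate` while keeping Angular's `required` and `minlength` validators on the same control. The selector, the `heroName` field, and the assumption that `FormsModule` is imported in the declaring NgModule are all illustrative.

```
// Minimal sketch: combining native browser validation with Angular validation.
// Assumes FormsModule is imported in the NgModule that declares this component.
import { Component } from '@angular/core';

@Component({
  selector: 'app-native-validation-demo',
  template: `
    <!-- ngNativeValidate prevents Angular from adding the novalidate attribute,
         so the browser shows its own validation UI on submit. -->
    <form ngNativeValidate>
      <!-- required and minlength also register Angular validators via FormsModule -->
      <input name="heroName" [(ngModel)]="heroName" required minlength="3">
      <button type="submit">Save</button>
    </form>
  `,
})
export class NativeValidationDemoComponent {
  heroName = '';
}
```

With this sketch, submitting an empty field triggers the browser's native message while the control's Angular validity state is still available for styling and error messages.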
angular Practical observable usage Practical observable usage ========================== Here are some examples of domains in which observables are particularly useful. Type-ahead suggestions ---------------------- Observables can simplify the implementation of type-ahead suggestions. Typically, a type-ahead has to do a series of separate tasks: * Listen for data from an input * Trim the value (remove whitespace) and make sure it's a minimum length * Debounce (so as not to send off API requests for every keystroke, but instead wait for a break in keystrokes) * Don't send a request if the value stays the same (rapidly hit a character, then backspace, for instance) * Cancel ongoing AJAX requests if their results will be invalidated by the updated results Writing this in full JavaScript can be quite involved. With observables, you can use a simple series of RxJS operators: ``` import { fromEvent, Observable } from 'rxjs'; import { ajax } from 'rxjs/ajax'; import { debounceTime, distinctUntilChanged, filter, map, switchMap } from 'rxjs/operators'; const searchBox = document.getElementById('search-box') as HTMLInputElement; const typeahead = fromEvent(searchBox, 'input').pipe( map(e => (e.target as HTMLInputElement).value), filter(text => text.length > 2), debounceTime(10), distinctUntilChanged(), switchMap(searchTerm => ajax(`/api/endpoint?search=${searchTerm}`)) ); typeahead.subscribe(data => { // Handle the data from the API }); ``` Exponential backoff ------------------- Exponential backoff is a technique in which you retry an API after failure, making the time in between retries longer after each consecutive failure, with a maximum number of retries after which the request is considered to have failed. This can be quite complex to implement with promises and other methods of tracking AJAX calls. With observables, it is very easy: ``` import { of, pipe, range, throwError, timer, zip } from 'rxjs'; import { ajax } from 'rxjs/ajax'; import { map, mergeMap, retryWhen } from 'rxjs/operators'; export function backoff(maxTries: number, delay: number) { return pipe( retryWhen(attempts => zip(range(1, maxTries + 1), attempts).pipe( mergeMap(([i, err]) => (i > maxTries) ? throwError(err) : of(i)), map(i => i * i), mergeMap(v => timer(v * delay)), ), ), ); } ajax('/api/endpoint') .pipe(backoff(3, 250)) .subscribe(function handleData(data) { /* ... */ }); ``` Last reviewed on Mon Feb 28 2022 angular Attribute binding Attribute binding ================= Attribute binding in Angular helps you set values for attributes directly. With attribute binding, you can improve accessibility, style your application dynamically, and manage multiple CSS classes or styles simultaneously. > See the live example for a working example containing the code snippets in this guide. > > Prerequisites ------------- * [Property Binding](property-binding) Syntax ------ Attribute binding syntax resembles [property binding](property-binding), but instead of an element property between brackets, you precede the name of the attribute with the prefix `attr`, followed by a dot. Then, you set the attribute value with an expression that resolves to a string. ``` <p [attr.attribute-you-are-targeting]="expression"></p> ``` > When the expression resolves to `null` or `undefined`, Angular removes the attribute altogether. > > Binding ARIA attributes ----------------------- One of the primary use cases for attribute binding is to set ARIA attributes. 
To bind to an ARIA attribute, type the following: ``` <!-- create and set an aria attribute for assistive technology --> <button type="button" [attr.aria-label]="actionName">{{actionName}} with Aria</button> ``` Binding to `colspan` -------------------- Another common use case for attribute binding is with the `colspan` attribute in tables. Binding to the `colspan` attribute helps you to keep your tables programmatically dynamic. Depending on the amount of data that your application populates a table with, the number of columns that a row spans could change. To use attribute binding with the `<td>` attribute `colspan` 1. Specify the `colspan` attribute by using the following syntax: `[attr.colspan]`. 2. Set `[attr.colspan]` equal to an expression. In the following example, you bind the `colspan` attribute to the expression `1 + 1`. ``` <!-- expression calculates colspan=2 --> <tr><td [attr.colspan]="1 + 1">One-Two</td></tr> ``` This binding causes the `<tr>` to span two columns. > Sometimes there are differences between the name of property and an attribute. > > `colspan` is an attribute of `<td>`, while `colSpan` with a capital "S" is a property. When using attribute binding, use `colspan` with a lowercase "s". > > For more information on how to bind to the `colSpan` property, see the [`colspan` and `colSpan`](property-binding#colspan) section of [Property Binding](property-binding). > > What’s next ----------- * [Class & Style Binding](class-binding) Last reviewed on Mon May 02 2022 angular Authoring schematics Authoring schematics ==================== You can create your own schematics to operate on Angular projects. Library developers typically package schematics with their libraries to integrate them with the Angular CLI. You can also create stand-alone schematics to manipulate the files and constructs in Angular applications as a way of customizing them for your development environment and making them conform to your standards and constraints. Schematics can be chained, running other schematics to perform complex operations. Manipulating the code in an application has the potential to be both very powerful and correspondingly dangerous. For example, creating a file that already exists would be an error, and if it was applied immediately, it would discard all the other changes applied so far. The Angular Schematics tooling guards against side effects and errors by creating a virtual file system. A schematic describes a pipeline of transformations that can be applied to the virtual file system. When a schematic runs, the transformations are recorded in memory, and only applied in the real file system once they're confirmed to be valid. Schematics concepts ------------------- The public API for schematics defines classes that represent the basic concepts. * The virtual file system is represented by a `Tree`. The `Tree` data structure contains a *base* (a set of files that already exists) and a *staging area* (a list of changes to be applied to the base). When making modifications, you don't actually change the base, but add those modifications to the staging area. * A `Rule` object defines a function that takes a `Tree`, applies transformations, and returns a new `Tree`. The main file for a schematic, `index.ts`, defines a set of rules that implement the schematic's logic. * A transformation is represented by an `Action`. There are four action types: `Create`, `Rename`, `Overwrite`, and `Delete`. * Each schematic runs in a context, represented by a `SchematicContext` object. 
The context object passed into a rule provides access to utility functions and metadata that the schematic might need to work with, including a logging API to help with debugging. The context also defines a *merge strategy* that determines how changes are merged from the staged tree into the base tree. A change can be accepted or ignored, or throw an exception. ### Defining rules and actions When you create a new blank schematic with the [Schematics CLI](schematics-authoring#cli), the generated entry function is a *rule factory*. A `RuleFactory` object defines a higher-order function that creates a `Rule`. ``` import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics'; // You don't have to export the function as default. // You can also have more than one rule factory per file. export function helloWorld(_options: any): Rule { return (tree: Tree, _context: SchematicContext) => { return tree; }; } ``` Your rules can make changes to your projects by calling external tools and implementing logic. You need a rule, for example, to define how a template in the schematic is to be merged into the hosting project. Rules can make use of utilities provided with the `@schematics/angular` package. Look for helper functions for working with modules, dependencies, TypeScript, AST, JSON, Angular CLI workspaces and projects, and more. ``` import { JsonAstObject, JsonObject, JsonValue, Path, normalize, parseJsonAst, strings, } from '@angular-devkit/core'; ``` ### Defining input options with a schema and interfaces Rules can collect option values from the caller and inject them into templates. The options available to your rules, with their allowed values and defaults, are defined in the schematic's JSON schema file, `<schematic>/schema.json`. Define variable or enumerated data types for the schema using TypeScript interfaces. The schema defines the types and default values of variables used in the schematic. For example, the hypothetical "Hello World" schematic might have the following schema. ``` { "properties": { "name": { "type": "string", "minLength": 1, "default": "world" }, "useColor": { "type": "boolean" } } } ``` See examples of schema files for the Angular CLI command schematics in [`@schematics/angular`](https://github.com/angular/angular-cli/blob/main/packages/schematics/angular/application/schema.json). ### Schematic prompts Schematic *prompts* introduce user interaction into schematic execution. Configure schematic options to display a customizable question to the user. The prompts are displayed before the execution of the schematic, which then uses the response as the value for the option. This lets users direct the operation of the schematic without requiring in-depth knowledge of the full spectrum of available options. The "Hello World" schematic might, for example, ask the user for their name, and display that name in place of the default name "world". To define such a prompt, add an `x-prompt` property to the schema for the `name` variable. Similarly, you can add a prompt to let the user decide whether the schematic uses color when executing its hello action. The schema with both prompts would be as follows. ``` { "properties": { "name": { "type": "string", "minLength": 1, "default": "world", "x-prompt": "What is your name?" }, "useColor": { "type": "boolean", "x-prompt": "Would you like the response in color?" } } } ``` #### Prompt short-form syntax These examples use a shorthand form of the prompt syntax, supplying only the text of the question. 
In most cases, this is all that is required. Notice however, that the two prompts expect different types of input. When using the shorthand form, the most appropriate type is automatically selected based on the property's schema. In the example, the `name` prompt uses the `input` type because it is a string property. The `useColor` prompt uses a `confirmation` type because it is a Boolean property. In this case, "yes" corresponds to `true` and "no" corresponds to `false`. There are three supported input types. | Input type | Details | | --- | --- | | confirmation | A yes or no question; ideal for Boolean options. | | input | Textual input; ideal for string or number options. | | list | A predefined set of allowed values. | In the short form, the type is inferred from the property's type and constraints. | Property schema | Prompt type | | --- | --- | | "type": "boolean" | confirmation ("yes"=`true`, "no"=`false`) | | "type": "string" | input | | "type": "number" | input (only valid numbers accepted) | | "type": "integer" | input (only valid numbers accepted) | | "enum": […] | list (enum members become list selections) | In the following example, the property takes an enumerated value, so the schematic automatically chooses the list type, and creates a menu from the possible values. ``` "style": { "description": "The file extension or preprocessor to use for style files.", "type": "string", "default": "css", "enum": [ "css", "scss", "sass", "less", "styl" ], "x-prompt": "Which stylesheet format would you like to use?" } ``` The prompt runtime automatically validates the provided response against the constraints provided in the JSON schema. If the value is not acceptable, the user is prompted for a new value. This ensures that any values passed to the schematic meet the expectations of the schematic's implementation, so that you do not need to add additional checks within the schematic's code. #### Prompt long-form syntax The `x-prompt` field syntax supports a long form for cases where you require additional customization and control over the prompt. In this form, the `x-prompt` field value is a JSON object with subfields that customize the behavior of the prompt. | Field | Data value | | --- | --- | | type | `confirmation`, `input`, or `list` (selected automatically in short form) | | message | string (required) | | items | string and/or label/value object pair (only valid with type `list`) | The following example of the long form is from the JSON schema for the schematic that the CLI uses to [generate applications](https://github.com/angular/angular-cli/blob/ba8a6ea59983bb52a6f1e66d105c5a77517f062e/packages/schematics/angular/application/schema.json#L56). It defines the prompt that lets users choose which style preprocessor they want to use for the application being created. By using the long form, the schematic can provide more explicit formatting of the menu choices. 
``` "style": { "description": "The file extension or preprocessor to use for style files.", "type": "string", "default": "css", "enum": [ "css", "scss", "sass", "less" ], "x-prompt": { "message": "Which stylesheet format would you like to use?", "type": "list", "items": [ { "value": "css", "label": "CSS" }, { "value": "scss", "label": "SCSS [ https://sass-lang.com/documentation/syntax#scss ]" }, { "value": "sass", "label": "Sass [ https://sass-lang.com/documentation/syntax#the-indented-syntax ]" }, { "value": "less", "label": "Less [ http://lesscss.org/ ]" } ] }, }, ``` #### x-prompt schema The JSON schema that defines a schematic's options supports extensions to allow the declarative definition of prompts and their respective behavior. No additional logic or changes are required to the code of a schematic to support the prompts. The following JSON schema is a complete description of the long-form syntax for the `x-prompt` field. ``` { "oneOf": [ { "type": "string" }, { "type": "object", "properties": { "type": { "type": "string" }, "message": { "type": "string" }, "items": { "type": "array", "items": { "oneOf": [ { "type": "string" }, { "type": "object", "properties": { "label": { "type": "string" }, "value": { } }, "required": [ "value" ] } ] } } }, "required": [ "message" ] } ] } ``` Schematics CLI -------------- Schematics come with their own command-line tool. Using Node 6.9 or later, install the Schematics command line tool globally: ``` npm install -g @angular-devkit/schematics-cli ``` This installs the `schematics` executable, which you can use to create a new schematics collection in its own project folder, add a new schematic to an existing collection, or extend an existing schematic. In the following sections, you will create a new schematics collection using the CLI to introduce the files and file structure, and some of the basic concepts. The most common use of schematics, however, is to integrate an Angular library with the Angular CLI. Do this by creating the schematic files directly within the library project in an Angular workspace, without using the Schematics CLI. See [Schematics for Libraries](schematics-for-libraries). ### Creating a schematics collection The following command creates a new schematic named `hello-world` in a new project folder of the same name. ``` schematics blank --name=hello-world ``` The `blank` schematic is provided by the Schematics CLI. The command creates a new project folder (the root folder for the collection) and an initial named schematic in the collection. Go to the collection folder, install your npm dependencies, and open your new collection in your favorite editor to see the generated files. For example, if you are using VS Code: ``` cd hello-world npm install npm run build code . ``` The initial schematic gets the same name as the project folder, and is generated in `src/hello-world`. Add related schematics to this collection, and modify the generated skeleton code to define your schematic's functionality. Each schematic name must be unique within the collection. ### Running a schematic Use the `schematics` command to run a named schematic. Provide the path to the project folder, the schematic name, and any mandatory options, in the following format. ``` schematics <path-to-schematics-project>:<schematics-name> --<required-option>=<value> ``` The path can be absolute or relative to the current working directory where the command is executed. 
For example, to run the schematic you just generated (which has no required options), use the following command. ``` schematics .:hello-world ``` ### Adding a schematic to a collection To add a schematic to an existing collection, use the same command you use to start a new schematics project, but run the command inside the project folder. ``` cd hello-world schematics blank --name=goodbye-world ``` The command generates the new named schematic inside your collection, with a main `index.ts` file and its associated test spec. It also adds the name, description, and factory function for the new schematic to the collection's schema in the `collection.json` file. Collection contents ------------------- The top level of the root project folder for a collection contains configuration files, a `node_modules` folder, and a `src/` folder. The `src/` folder contains subfolders for named schematics in the collection, and a schema, `collection.json`, which describes the collected schematics. Each schematic is created with a name, description, and factory function. ``` { "$schema": "../node_modules/@angular-devkit/schematics/collection-schema.json", "schematics": { "hello-world": { "description": "A blank schematic.", "factory": "./hello-world/index#helloWorld" } } } ``` * The `$schema` property specifies the schema that the CLI uses for validation. * The `schematics` property lists named schematics that belong to this collection. Each schematic has a plain-text description, and points to the generated entry function in the main file. * The `factory` property points to the generated entry function. In this example, you invoke the `hello-world` schematic by calling the `helloWorld()` factory function. * The optional `schema` property points to a JSON schema file that defines the command-line options available to the schematic. * The optional `aliases` array specifies one or more strings that can be used to invoke the schematic. For example, the schematic for the Angular CLI "generate" command has an alias "g", that lets you use the command `ng g`. ### Named schematics When you use the Schematics CLI to create a blank schematics project, the new blank schematic is the first member of the collection, and has the same name as the collection. When you add a new named schematic to this collection, it is automatically added to the `collection.json` schema. In addition to the name and description, each schematic has a `factory` property that identifies the schematic's entry point. In the example, you invoke the schematic's defined functionality by calling the `helloWorld()` function in the main file, `hello-world/index.ts`. Each named schematic in the collection has the following main parts. | Parts | Details | | --- | --- | | `index.ts` | Code that defines the transformation logic for a named schematic. | | `schema.json` | Schematic variable definition. | | `schema.d.ts` | Schematic variables. | | `files/` | Optional component/template files to replicate. | It is possible for a schematic to provide all of its logic in the `index.ts` file, without additional templates. You can create dynamic schematics for Angular, however, by providing components and templates in the `files` folder, like those in standalone Angular projects. The logic in the index file configures these templates by defining rules that inject data and modify variables. Last reviewed on Mon Feb 28 2022
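As a brief addendum to the schematics guide above, here is a sketch, not part of the guide, of how the body of `hello-world/index.ts` might use the `name` option defined in the earlier schema to stage a file on the virtual `Tree`. The file path and message are illustrative assumptions.

```
// Hypothetical extension of the generated hello-world rule factory.
// It stages a Create action on the virtual Tree; nothing touches the real
// file system until the whole rule pipeline is confirmed to be valid.
import { Rule, SchematicContext, SchematicsException, Tree } from '@angular-devkit/schematics';

interface HelloWorldOptions {
  name: string;
}

export function helloWorld(options: HelloWorldOptions): Rule {
  return (tree: Tree, context: SchematicContext) => {
    if (!options.name) {
      throw new SchematicsException('Option "name" is required.');
    }
    const path = `/hello-${options.name}.txt`;
    // tree.create records a Create action in the Tree's staging area.
    if (!tree.exists(path)) {
      tree.create(path, `Hello, ${options.name}!\n`);
    }
    context.logger.info(`Staged ${path}`);
    return tree;
  };
}
```

Running `schematics .:hello-world --name=Angular` against the local collection would then report the staged file; note that the Schematics CLI tends to run local collections in dry-run mode by default, so no file is written unless you disable that behavior.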
angular Feature modules Feature modules =============== Feature modules are NgModules for the purpose of organizing code. For the final sample application with a feature module that this page describes, see the live example. As your application grows, you can organize code relevant for a specific feature. This helps apply clear boundaries for features. With feature modules, you can keep code related to a specific functionality or feature separate from other code. Delineating areas of your application helps with collaboration between developers and teams, separating directives, and managing the size of the root module. Feature modules vs. root modules -------------------------------- A feature module is an organizational best practice, as opposed to a concept of the core Angular API. A feature module delivers a cohesive set of functionality focused on a specific application need such as a user workflow, routing, or forms. While you can do everything within the root module, feature modules help you partition the application into focused areas. A feature module collaborates with the root module and with other modules through the services it provides and the components, directives, and pipes that it shares. How to make a feature module ---------------------------- Assuming you already have an application that you created with the [Angular CLI](cli), create a feature module using the CLI by entering the following command in the root project directory. Replace `CustomerDashboard` with the name of your module. You can omit the "Module" suffix from the name because the CLI appends it: ``` ng generate module CustomerDashboard ``` This causes the CLI to create a folder called `customer-dashboard` with a file inside called `customer-dashboard.module.ts` with the following contents: ``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; @NgModule({ imports: [ CommonModule ], declarations: [] }) export class CustomerDashboardModule { } ``` The structure of an NgModule is the same whether it is a root module or a feature module. In the CLI generated feature module, there are two JavaScript import statements at the top of the file: the first imports `[NgModule](../api/core/ngmodule)`, which, like the root module, lets you use the `@[NgModule](../api/core/ngmodule)` decorator; the second imports `[CommonModule](../api/common/commonmodule)`, which contributes many common directives such as `[ngIf](../api/common/ngif)` and `[ngFor](../api/common/ngfor)`. Feature modules import `[CommonModule](../api/common/commonmodule)` instead of `[BrowserModule](../api/platform-browser/browsermodule)`, which is only imported once in the root module. `[CommonModule](../api/common/commonmodule)` only contains information for common directives such as `[ngIf](../api/common/ngif)` and `[ngFor](../api/common/ngfor)` which are needed in most templates, whereas `[BrowserModule](../api/platform-browser/browsermodule)` configures the Angular application for the browser which needs to be done only once. The `declarations` array is available for you to add declarables, which are components, directives, and pipes that belong exclusively to this particular module. 
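For illustration only (these classes are hypothetical and are not generated by the commands in this guide), a feature module that declares one of each kind of declarable might look like the following sketch.

```
// Hypothetical sketch of a feature module declaring a component, a directive,
// and a pipe. The imported files do not exist in the generated project.
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { OrderListComponent } from './order-list/order-list.component';
import { HighlightDirective } from './highlight.directive';
import { CurrencySuffixPipe } from './currency-suffix.pipe';

@NgModule({
  imports: [CommonModule],
  declarations: [OrderListComponent, HighlightDirective, CurrencySuffixPipe],
})
export class CustomerDashboardModule {}
```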
To add a component, enter the following command at the command line where `customer-dashboard` is the directory where the CLI generated the feature module and `CustomerDashboard` is the name of the component: ``` ng generate component customer-dashboard/CustomerDashboard ``` This generates a folder for the new component within the customer-dashboard folder and updates the feature module with the `CustomerDashboardComponent` info: ``` // import the new component import { CustomerDashboardComponent } from './customer-dashboard/customer-dashboard.component'; @NgModule({ imports: [ CommonModule ], declarations: [ CustomerDashboardComponent ], }) ``` The `CustomerDashboardComponent` is now in the JavaScript import list at the top and added to the `declarations` array, which lets Angular know to associate this new component with this feature module. Importing a feature module -------------------------- To incorporate the feature module into your app, you have to let the root module, `app.module.ts`, know about it. Notice the `CustomerDashboardModule` export at the bottom of `customer-dashboard.module.ts`. This exposes it so that other modules can get to it. To import it into the `AppModule`, add it to the imports in `app.module.ts` and to the `imports` array: ``` import { HttpClientModule } from '@angular/common/http'; import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; // import the feature module here so you can add it to the imports array below import { CustomerDashboardModule } from './customer-dashboard/customer-dashboard.module'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, FormsModule, HttpClientModule, CustomerDashboardModule // add the feature module here ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } ``` Now the `AppModule` knows about the feature module. If you were to add any service providers to the feature module, `AppModule` would know about those too, as would any other feature modules. However, NgModules don't expose their components by default. Rendering a feature module's component template ----------------------------------------------- When the CLI generated the `CustomerDashboardComponent` for the feature module, it included a template, `customer-dashboard.component.html`, with the following markup: ``` <p> customer-dashboard works! </p> ``` To see this HTML in the `AppComponent`, you first have to export the `CustomerDashboardComponent` in the `CustomerDashboardModule`. 
In `customer-dashboard.module.ts`, just beneath the `declarations` array, add an `exports` array containing `CustomerDashboardComponent`: ``` exports: [ CustomerDashboardComponent ] ``` Next, in the `AppComponent`, `app.component.html`, add the tag `<app-customer-dashboard>`: ``` <h1> {{title}} </h1> <!-- add the selector from the CustomerDashboardComponent --> <app-customer-dashboard></app-customer-dashboard> ``` Now, in addition to the title that renders by default, the `CustomerDashboardComponent` template renders too: More on NgModules ----------------- You may also be interested in the following: * [Lazy Loading Modules with the Angular Router](lazy-loading-ngmodules) * [Providers](providers) * [Types of Feature Modules](module-types) Last reviewed on Mon Feb 28 2022 angular Getting started with service workers Getting started with service workers ==================================== This document explains how to enable Angular service worker support in projects that you created with the [Angular CLI](cli). It then uses an example to show you a service worker in action, demonstrating loading and basic caching. Prerequisites ------------- A basic understanding of the information in [Introduction to Angular service workers](service-worker-intro). Adding a service worker to your project --------------------------------------- To set up the Angular service worker in your project, use the CLI command `ng add @angular/pwa`. It takes care of configuring your application to use service workers by adding the `@angular/service-worker` package along with setting up the necessary support files. ``` ng add @angular/pwa --project <project-name> ``` The preceding command completes the following actions: 1. Adds the `@angular/service-worker` package to your project. 2. Enables service worker build support in the CLI. 3. Imports and registers the service worker in the application module. 4. Updates the `index.html` file: * Includes a link to add the `manifest.webmanifest` file * Adds a meta tag for `theme-color` 5. Installs icon files to support the installed Progressive Web App (PWA). 6. Creates the service worker configuration file called [`ngsw-config.json`](service-worker-config), which specifies the caching behaviors and other settings. Now, build the project: ``` ng build ``` The CLI project is now set up to use the Angular service worker. Service worker in action: a tour -------------------------------- This section demonstrates a service worker in action, using an example application. ### Initial load With the server running, point your browser at `http://localhost:8080`. Your application should load normally. > **TIP**: When testing Angular service workers, it's a good idea to use an incognito or private window in your browser to ensure the service worker doesn't end up reading from a previous leftover state, which can cause unexpected behavior. > > > **NOTE**: If you are not using HTTPS, the service worker will only be registered when accessing the application on `localhost`. > > ### Simulating a network issue To simulate a network issue, disable network interaction for your application. In Chrome: 1. Select **Tools** > **Developer Tools** (from the Chrome menu located in the top right corner). 2. Go to the **Network tab**. 3. Select **Offline** in the **Throttling** dropdown menu. Now the application has no access to network interaction. 
For applications that do not use the Angular service worker, refreshing now would display Chrome's Internet disconnected page that says "There is no Internet connection". With the addition of an Angular service worker, the application behavior changes. On a refresh, the page loads normally. Look at the Network tab to verify that the service worker is active. > **NOTE**: Under the "Size" column, the requests state is `(ServiceWorker)`. This means that the resources are not being loaded from the network. Instead, they are being loaded from the service worker's cache. > > ### What's being cached? Notice that all of the files the browser needs to render this application are cached. The `ngsw-config.json` boilerplate configuration is set up to cache the specific resources used by the CLI: * `index.html` * `favicon.ico` * Build artifacts (JS and CSS bundles) * Anything under `assets` * Images and fonts directly under the configured `outputPath` (by default `./dist/<project-name>/`) or `resourcesOutputPath`. See [`ng build`](cli/build) for more information about these options. > Pay attention to two key points: > > 1. The generated `ngsw-config.json` includes a limited list of cacheable fonts and images extensions. In some cases, you might want to modify the glob pattern to suit your needs. > 2. If `resourcesOutputPath` or `assets` paths are modified after the generation of configuration file, you need to change the paths manually in `ngsw-config.json`. > > ### Making changes to your application Now that you've seen how service workers cache your application, the next step is understanding how updates work. Make a change to the application, and watch the service worker install the update: 1. If you're testing in an incognito window, open a second blank tab. This keeps the incognito and the cache state alive during your test. 2. Close the application tab, but not the window. This should also close the Developer Tools. 3. Shut down `http-server`. 4. Open `src/app/app.component.html` for editing. 5. Change the text `Welcome to {{title}}!` to `Bienvenue à {{title}}!`. 6. Build and run the server again: ``` ng build http-server -p 8080 -c-1 dist/<project-name> ``` ### Updating your application in the browser Now look at how the browser and service worker handle the updated application. 1. Open <http://localhost:8080> again in the same window. What happens? What went wrong? Nothing, actually. The Angular service worker is doing its job and serving the version of the application that it has **installed**, even though there is an update available. In the interest of speed, the service worker doesn't wait to check for updates before it serves the application that it has cached. Look at the `http-server` logs to see the service worker requesting `/ngsw.json`. This is how the service worker checks for updates. 2. Refresh the page. The service worker installed the updated version of your application *in the background*, and the next time the page is loaded or reloaded, the service worker switches to the latest version. More on Angular service workers ------------------------------- You might also be interested in the following: * [App Shell](app-shell) * [Communicating with service workers](service-worker-communications) Last reviewed on Mon Feb 28 2022 angular Resolve documentation linter messages Resolve documentation linter messages ===================================== This topic describes different ways to resolve common messages that the documentation linter produces. 
Anatomy of a documentation linter message ----------------------------------------- This is an example of a message produced by the documentation linter. A documentation linter message contains these elements. Starting from the top line: * The severity. One of these icons indicates the severity of the message: + **Error** (A red `x` in a circle) Errors must be corrected before the file can be merged. + **Warning** (A yellow exclamation mark in a triangle) Warnings should be corrected before the file is merged. + **Info** (A blue lower-case `i` in a circle) Informational messages should be corrected before the file is merged. * The style rule message. The style rule message in this example is: ``` Did you really mean 'sdfdsfsdfdfssd'? It wasn't found in our dictionary. ``` * The style reference. Some references are linked to a style guide topic that explains the rule. The style reference in this example is: ``` Vale(Angular.Angular_Spelling) ``` * The location of the problem text in the document identified by source line and column as precisely as possible. Some messages might not have the exact location of the text that triggered the message. The location in this example is: ``` [Ln 8, Col 1] ``` * The style test definition file that produced the message, which is linked to the file. The style test definition in this example is: ``` Angular_Spelling.yml[Ln 1, Col 1]: View rule ``` Strategies to improve your documentation ---------------------------------------- These tips can help you improve your documentation and remove documentation linter messages. ### Refer to the style guides The lint tool tests against the styles found in these style guides. Most style tests include links to relevant sections in these documents for more information. * [Angular documentation style guide](docs-style-guide "Angular documentation style guide | Angular") * [Google Developer Documentation Style Guide](https://developers.google.com/style "About this guide | Google developer documentation style guide | Google Developers") > Not every style mentioned in the style guides has a test. Style guides and the style tests can change. > > ### Split up long sentences Generally, shorter sentences are easier to read than longer ones. Long sentences can occur when you try to say too much at once. Long sentences, as well as the use of parentheses, semicolons, or words identified as `too-wordy`, generally require rethinking and rewriting. Consider restructuring a long sentence to break its individual ideas into distinct sentences or bullet points. ### Use lists and tables Sentences that contain comma-separated lists might be clearer if presented as a bulleted-list or table. Consider changing a comma-separated list of items in a sentence to a list of bullets to make those list items easier to read. ### Use more common words Shorter, more common words are generally easier to read than longer ones. This does not mean you need to write down to the audience. Technical docs should still be precise. Angular docs are read by many people around the world and should use language that the most people can understand. If you think a specific term is required even though it has been flagged as uncommon, try to include a short explanation of the term. Also, try adding some context around its first mention. Linking a term to another section or topic is also an option, but consider the disruption that causes to the reader before you use it. 
If you force a reader to go to another page for a definition, they might lose their concentration on the current topic and their primary goal. ### Use fewer words If you can remove a word and not lose the meaning of the sentence, leave it out. One common place where removing words can help is in a list of examples with more than two or three items. Before you place the items in a bullet list, consider if only one of the items can convey the desired meaning. Another option might be to replace a list of items with a single term that describes all the elements in your list. More about specific documentation linter messages ------------------------------------------------- Most documentation linter messages are self-explanatory and include a link to supplementary documentation. Some messages identify areas in that the documentation might need more thought. The following types of messages often occur in areas of the text that should be reconsidered and rewritten to improve the text and remove the message. ### A word is `too-wordy` or should be replaced by another Generally, technical documentation should use a simple and consistent vocabulary to be understood by a wide audience. Words that trigger this message are usually words for which there's a simpler way to convey the same thought. #### Angular.WriteGood\_TooWordy - See if you can rewrite the sentence... Words identified by this style test can usually be replaced by simpler words. If not, sentences with these words should be revised to use simpler language and avoid the word in the message. The following table has some common words detected by this type of message and simpler words to try in their place. | `Too-wordy` word | Simpler replacement | | --- | --- | | `accelerate` | `speed up` | | `accomplish` | `perform` or `finish` | | `acquire` | `get` | | `additional` | `more` | | `adjustment` | `change` | | `advantageous` | `beneficial` | | `consequently` | `as a result` | | `designate` | `assign` | | `equivalent` | `the same` | | `exclusively` | `only` | | `for the most part` | `generally` | | `have a tendency to` | `tend to` | | `in addition` | `furthermore` | | `modify` | `change` or `update` | | `monitor` | `observe` | | `necessitate` | `require` | | `one particular` | `one` | | `point in time` | `moment` | | `portion` | `part` | | `similar to` | `like` | | `validate` | `verify` | | `whether or not` | `whether` | #### `WordList` messages The messages about words detected by these style tests generally suggest a better alternative. While the word you used would probably be understood, it most likely triggered this message for one of the following reasons: * The suggested works better in a screen-reader context * The word that you used could produce an unpleasant response in the reader * The suggested word is simpler, shorter, or easier for more people to understand * The word you used has other possible variations. The suggested word is the variation to use in the documentation to be consistent. ### `Proselint` messages The Proselint style tests test for words that are jargon or that could be offensive to some people. Rewrite the text to replace the jargon or offensive language with more inclusive language. ### `Starting a sentence` messages Some words, such as *so* and *there is/are*, aren't necessary at the beginning of a sentence. Sentences that start with the words identified by this message can usually be made shorter, simpler, and clearer by rewriting them without those openings. 
### Cliches Cliches should be replaced by more literal text. Cliches make it difficult for people who don't understand English to understand the documentation. When cliches are translated by online tools such as Google Translate, they can produce confusing results. If all else fails ----------------- The style rules generally guide you in the direction of clearer content, but sometimes you might need to break the rules. If you decide that the best choice for the text conflicts with the linter, mark the text as an exception to linting. The documentation linter checks only the content that is rendered as text. It does not test code-formatted text. One common source of false problems is code references that are not formatted as code. If you use these exceptions, please limit the amount of text that you exclude from analysis to the fewest lines possible. When necessary, you can apply these exceptions to your content. 1. **General exception** A general exception allows you to exclude the specified text from all lint testing. To apply a general exception, surround the text that you do not want the linter to test with the HTML `comment` elements shown in this example. ``` <!-- vale off --> Text the linter does not check for any style problem. <!-- vale on --> ``` Be sure to leave a blank line before and after each comment. 2. **Style exception** A style exception allows you to exclude text from an individual style test. To apply a style exception, surround the text that you do not want the linter to test with these HTML `comment` elements. Between these comments, the linter ignores the style test in the comment, but still tests for all other styles that are in use. ``` <!-- vale Style.Rule = NO --> <!-- vale Style.Rule = YES --> ``` Replace `Style.Rule` in the comments with the style rule reference from the problem message displayed in the IDE. For example, imagine that you got this problem message and you want to use the word it identified as a problem. ``` Did you really mean 'inlines'? It was not found in our dictionary. Vale(Angular.Angular_Spelling) [Ln 24, Col 59] Angular_Spelling.yml[Ln 1, Col 1]: View rule ``` The `Style.Rule` for this message is the text inside the parentheses: `Angular.Angular_Spelling` in this case. To turn off that style test, use the comments shown in this example. ``` <!-- vale Angular.Angular_Spelling = NO --> 'inlines' does not display a problem because this text is not spell-checked. Remember that the linter does not check any spelling in this block of text. The linter continues to test all other style rules. <!-- vale Angular.Angular_Spelling = YES --> ``` Last reviewed on Wed Oct 12 2022
angular Understanding Angular Understanding Angular ===================== To understand the capabilities of the Angular framework, you need to learn about the following: * Components * Templates * Directives * Dependency injection The topics in this section explain these features and concepts, and how you can use them. Prerequisites ------------- To get the most out of these developer guides, you should review the following topics: * [What is Angular](what-is-angular "What is Angular\? | Angular") * [Getting started tutorial](start "Getting started with Angular | Angular") Learn about Angular basics -------------------------- [Components Learn about Angular components. A component is a key building block of Angular development. Components](component-overview "Components") [Templates Learn about how to build an Angular template. Templates](template-syntax "Templates") [Directives Learn about Angular directives. A directive is a class that adds additional behavior to elements in your Angular applications. Directives](built-in-directives "Directives") [Dependency injection Learn about dependency injection. Dependency injection refers to services or objects that a class needs to perform a specific function. Dependency injection](dependency-injection "Dependency injection") Last reviewed on Mon Feb 28 2022 angular Workspace npm dependencies Workspace npm dependencies ========================== The Angular Framework, Angular CLI, and components used by Angular applications are packaged as [npm packages](https://docs.npmjs.com/getting-started/what-is-npm "What is npm?") and distributed using the [npm registry](https://docs.npmjs.com). You can download and install these npm packages by using the [npm CLI client](https://docs.npmjs.com/cli/install), which is installed with and runs as a [Node.js®](https://nodejs.org "Nodejs.org") application. By default, the Angular CLI uses the npm client. Alternatively, you can use the [yarn client](https://yarnpkg.com) for downloading and installing npm packages. > See [Local Environment Setup](setup-local "Setting up for Local Development") for information about the required versions and installation of `Node.js` and `npm`. > > If you already have projects running on your machine that use other versions of Node.js and npm, consider using [nvm](https://github.com/creationix/nvm) to manage the multiple versions of Node.js and npm. > > `package.json` -------------- Both `npm` and `yarn` install the packages that are identified in a [`package.json`](https://docs.npmjs.com/files/package.json) file. The CLI command `ng new` creates a `package.json` file when it creates the new workspace. This `package.json` is used by all projects in the workspace, including the initial application project that is created by the CLI when it creates the workspace. Initially, this `package.json` includes *a starter set of packages*, some of which are required by Angular and others that support common application scenarios. You add packages to `package.json` as your application evolves. You may even remove some. The `package.json` is organized into two groups of packages: | Packages | Details | | --- | --- | | [Dependencies](npm-packages#dependencies) | Essential to *running* applications. | | [DevDependencies](npm-packages#dev-dependencies) | Only necessary to *develop* applications. | > **LIBRARY DEVELOPERS**: By default, the CLI command [`ng generate library`](cli/generate) creates a `package.json` for the new library. That `package.json` is used when publishing the library to npm. 
For more information, see the CLI wiki page [Library Support](creating-libraries). > > Dependencies ------------ The packages listed in the `dependencies` section of `package.json` are essential to *running* applications. The `dependencies` section of `package.json` contains: | Packages | Details | | --- | --- | | [Angular packages](npm-packages#angular-packages) | Angular core and optional modules; their package names begin `@angular` | | [Support packages](npm-packages#support-packages) | 3rd party libraries that must be present for Angular applications to run | | [Polyfill packages](npm-packages#polyfills) | Polyfills plug gaps in a browser's JavaScript implementation | To add a new dependency, use the [`ng add`](cli/add) command. ### Angular packages The following Angular packages are included as dependencies in the default `package.json` file for a new Angular workspace. For a complete list of Angular packages, see the [API reference](api?type=package). | Package name | Details | | --- | --- | | [`@angular/animations`](../api/animations) | Angular's animations library makes it easy to define and apply animation effects such as page and list transitions. For more information, see the [Animations guide](animations). | | [`@angular/common`](../api/common) | The commonly-needed services, pipes, and directives provided by the Angular team. The [`HttpClientModule`](../api/common/http/httpclientmodule) is also here, in the [`@angular/common/http`](../api/common/http) subfolder. For more information, see the [HttpClient guide](http). | | `@angular/compiler` | Angular's template compiler. It understands templates and can convert them to code that makes the application run and render. Typically you don't interact with the compiler directly; rather, you use it indirectly using `platform-browser-dynamic` when JIT compiling in the browser. For more information, see the [Ahead-of-time Compilation guide](aot-compiler). | | [`@angular/core`](../api/core) | Critical runtime parts of the framework that are needed by every application. Includes all metadata decorators, `[Component](../api/core/component)`, `[Directive](../api/core/directive)`, dependency injection, and the component lifecycle hooks. | | [`@angular/forms`](../api/forms) | Support for both [template-driven](forms) and [reactive forms](reactive-forms). For information about choosing the best forms approach for your app, see [Introduction to forms](forms-overview). | | [`@angular/platform-browser`](../api/platform-browser) | Everything DOM and browser related, especially the pieces that help render into the DOM. This package also includes the `bootstrapModuleFactory()` method for bootstrapping applications for production builds that pre-compile with [AOT](aot-compiler). | | [`@angular/platform-browser-dynamic`](../api/platform-browser-dynamic) | Includes [providers](../api/core/provider) and methods to compile and run the application on the client using the [JIT compiler](aot-compiler). | | [`@angular/router`](../api/router) | The router module navigates among your application pages when the browser URL changes. For more information, see [Routing and Navigation](router). | ### Support packages The following support packages are included as dependencies in the default `package.json` file for a new Angular workspace. | Package name | Details | | --- | --- | | [`rxjs`](https://github.com/ReactiveX/rxjs) | Many Angular APIs return [*observables*](glossary#observable). 
RxJS is an implementation of the proposed [Observables specification](https://github.com/tc39/proposal-observable) currently before the [TC39](https://www.ecma-international.org/memento/tc39.htm) committee, which determines standards for the JavaScript language. | | [`zone.js`](https://github.com/angular/zone.js) | Angular relies on zone.js to run Angular's change detection processes when native JavaScript operations raise events. Zone.js is an implementation of a [specification](https://gist.github.com/mhevery/63fdcdf7c65886051d55) currently before the [TC39](https://www.ecma-international.org/memento/tc39.htm) committee that determines standards for the JavaScript language. | ### Polyfill packages Many browsers lack native support for some features in the latest HTML standards, features that Angular requires. [*Polyfills*](https://en.wikipedia.org/wiki/Polyfill_(programming)) can emulate the missing features. The [Browser Support](browser-support) guide explains which browsers need polyfills and how you can add them. DevDependencies --------------- The packages listed in the `devDependencies` section of `package.json` help you develop the application on your local machine. You don't deploy them with the production application. To add a new `devDependency`, use either one of the following commands: ``` npm install --save-dev <package-name> ``` ``` yarn add --dev <package-name> ``` The following `devDependencies` are provided in the default `package.json` file for a new Angular workspace. | Package name | Details | | --- | --- | | [`@angular-devkit/build-angular`](https://github.com/angular/angular-cli) | The Angular build tools. | | [`@angular/cli`](https://github.com/angular/angular-cli) | The Angular CLI tools. | | `@angular/compiler-cli` | The Angular compiler, which is invoked by the Angular CLI's `ng build` and `ng serve` commands. | | `@types/...` | TypeScript definition files for 3rd party libraries such as Jasmine and Node.js. | | `jasmine/...` | Packages to support the [Jasmine](https://jasmine.github.io) test library. | | `karma/...` | Packages to support the [karma](https://www.npmjs.com/package/karma) test runner. | | [`typescript`](https://www.npmjs.com/package/typescript) | The TypeScript language server, including the *tsc* TypeScript compiler. | Related information ------------------- For information about how the Angular CLI handles packages see the following guides: | Topics | Details | | --- | --- | | [Building and serving](build) | How packages come together to create a development build | | [Deployment](deployment) | How packages come together to create a production build | Last reviewed on Mon Feb 28 2022 angular Angular Language Service Angular Language Service ======================== The Angular Language Service provides code editors with a way to get completions, errors, hints, and navigation inside Angular templates. It works with external templates in separate HTML files, and also with in-line templates. Configuring compiler options for the Angular Language Service ------------------------------------------------------------- To enable the latest Language Service features, set the `strictTemplates` option in `tsconfig.json` by setting `strictTemplates` to `true,` as shown in the following example: ``` "angularCompilerOptions": { "strictTemplates": true } ``` For more information, see the [Angular compiler options](angular-compiler-options) guide. Features -------- Your editor autodetects that you are opening an Angular file. 
It then uses the Angular Language Service to read your `tsconfig.json` file, find all the templates you have in your application, and then provide language services for any templates that you open. Language services include: * Completions lists * AOT Diagnostic messages * Quick info * Go to definition ### Autocompletion Autocompletion can speed up your development time by providing you with contextual possibilities and hints as you type. This example shows autocomplete in an interpolation. As you type it out, you can press tab to complete. There are also completions within elements. Any elements you have as a component selector will show up in the completion list. ### Error checking The Angular Language Service can forewarn you of mistakes in your code. In this example, Angular doesn't know what `orders` is or where it comes from. ![error checking](https://angular.io/generated/images/guide/language-service/language-error.gif) ### Quick info and navigation The quick-info feature lets you hover to see where components, directives, and modules come from. You can then click "Go to definition" or press F12 to go directly to the definition. ![navigation](https://angular.io/generated/images/guide/language-service/language-navigation.gif) Angular Language Service in your editor --------------------------------------- Angular Language Service is currently available as an extension for [Visual Studio Code](https://code.visualstudio.com), [WebStorm](https://www.jetbrains.com/webstorm), [Sublime Text](https://www.sublimetext.com) and [Eclipse IDE](https://www.eclipse.org/eclipseide). ### Visual Studio Code In [Visual Studio Code](https://code.visualstudio.com), install the extension from the [Extensions: Marketplace](https://marketplace.visualstudio.com/items?itemName=Angular.ng-template). Open the marketplace from the editor using the Extensions icon on the left menu pane, or use VS Quick Open (⌘+P on Mac, CTRL+P on Windows) and type "? ext". In the marketplace, search for Angular Language Service extension, and click the **Install** button. The Visual Studio Code integration with the Angular language service is maintained and distributed by the Angular team. ### Visual Studio In [Visual Studio](https://visualstudio.microsoft.com), install the extension from the [Extensions: Marketplace](https://marketplace.visualstudio.com/items?itemName=TypeScriptTeam.AngularLanguageService). Open the marketplace from the editor selecting Extensions on the top menu pane, and then selecting Manage Extensions. In the marketplace, search for Angular Language Service extension, and click the **Install** button. The Visual Studio integration with the Angular language service is maintained and distributed by Microsoft with help from the Angular team. Check out the project [here](https://github.com/microsoft/vs-ng-language-service). ### WebStorm In [WebStorm](https://www.jetbrains.com/webstorm), enable the plugin [Angular and AngularJS](https://plugins.jetbrains.com/plugin/6971-angular-and-angularjs). Since WebStorm 2019.1, the `@angular/language-service` is not required anymore and should be removed from your `package.json`. ### Sublime Text In [Sublime Text](https://www.sublimetext.com), the Language Service supports only in-line templates when installed as a plug-in. You need a custom Sublime plug-in (or modifications to the current plug-in) for completions in HTML files. 
To use the Language Service for in-line templates, you must first add an extension to allow TypeScript, then install the Angular Language Service plug-in. Starting with TypeScript 2.3, TypeScript has a plug-in model that the language service can use. 1. Install the latest version of TypeScript in a local `node_modules` directory: ``` npm install --save-dev typescript ``` 2. Install the Angular Language Service package in the same location: ``` npm install --save-dev @angular/language-service ``` 3. Once the package is installed, add the following to the `"compilerOptions"` section of your project's `tsconfig.json`. ``` "plugins": [ {"name": "@angular/language-service"} ] ``` 4. In your editor's user preferences (`Cmd+,` or `Ctrl+,`), add the following: ``` "typescript-tsdk": "<path to your folder>/node_modules/typescript/lib" ``` This lets the Angular Language Service provide diagnostics and completions in `.ts` files. ### Eclipse IDE Either directly install the "Eclipse IDE for Web and JavaScript developers" package, which comes with the Angular Language Server included, or, from other Eclipse IDE packages, use Help > Eclipse Marketplace to find and install [Eclipse Wild Web Developer](https://marketplace.eclipse.org/content/wild-web-developer-html-css-javascript-typescript-nodejs-angular-json-yaml-kubernetes-xml). How the Language Service works ------------------------------ When you use an editor with a language service, the editor starts a separate language-service process and communicates with it through an [RPC](https://en.wikipedia.org/wiki/Remote_procedure_call), using the [Language Server Protocol](https://microsoft.github.io/language-server-protocol). When you type into the editor, the editor sends information to the language-service process to track the state of your project. When you trigger a completion list within a template, the editor first parses the template into an HTML [abstract syntax tree (AST)](https://en.wikipedia.org/wiki/Abstract_syntax_tree). The Angular compiler interprets that tree to determine the context: which module the template is part of, the current scope, the component selector, and where your cursor is in the template AST. It can then determine the symbols that could potentially be at that position. It's a little more involved if you are in an interpolation. If you have an interpolation of `{{data.---}}` inside a `div` and need the completion list after `data.---`, the compiler can't use the HTML AST to find the answer. The HTML AST can only tell the compiler that there is some text with the characters "`{{data.---}}`". That's when the template parser produces an expression AST, which resides within the template AST. The Angular Language Service then looks at `data.---` within its context, asks the TypeScript Language Service what the members of `data` are, and returns the list of possibilities.
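For example, in a minimal component like the following sketch (the `ProductComponent` and `data` names are hypothetical, used only for illustration), placing the cursor after `data.` in the interpolation gives the Language Service enough context to ask the TypeScript Language Service for the members of `data` and offer `name` and `price` as completions.

```
import { Component } from '@angular/core';

// Hypothetical model type used only to illustrate member completion.
interface Product {
  name: string;
  price: number;
}

@Component({
  selector: 'app-product',
  // Inside the interpolation, completions after `data.` come from the
  // TypeScript members of the `data` property: `name` and `price`.
  template: '<div>{{ data.name }}</div>',
})
export class ProductComponent {
  data: Product = { name: 'Sample product', price: 42 };
}
```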
More information ---------------- * For more in-depth information on the implementation, see the [Angular Language Service API](https://github.com/angular/angular/blob/main/packages/language-service/src/types.ts) * For more on the design considerations and intentions, see [design documentation here](https://github.com/angular/vscode-ng-language-service/wiki/Design) * See also [Chuck Jazdzewski's presentation](https://www.youtube.com/watch?v=ez3R0Gi4z5A&t=368s) on the Angular Language Service from [ng-conf](https://www.ng-conf.org) 2017 Last reviewed on Mon Feb 28 2022 angular Angular package format Angular package format ====================== This document describes the Angular Package Format (APF). APF is an Angular-specific specification for the structure and format of npm packages that is used by all first-party Angular packages (`@angular/core`, `@angular/material`, etc.) and most third-party Angular libraries. APF enables a package to work seamlessly under most common scenarios that use Angular. Packages that use APF are compatible with the tooling offered by the Angular team as well as the wider JavaScript ecosystem. It is recommended that third-party library developers follow the same npm package format. > APF is versioned along with the rest of Angular, and every major version improves the package format. You can find the versions of the specification prior to v13 in this [Google Doc](https://docs.google.com/document/d/1CZC2rcpxffTDfRDs6p1cfbmKNLA6x5O-NtkJglDaBVs/preview). > > Why specify a package format? ----------------------------- In today's JavaScript landscape, developers consume packages in many different ways, using many different toolchains (Webpack, Rollup, esbuild, etc.). These tools may understand and require different inputs - some tools may be able to process the latest ES language version, while others may benefit from directly consuming an older ES version. The Angular distribution format supports all of the commonly used development tools and workflows, and adds emphasis on optimizations that result either in smaller application payload size or faster development iteration cycle (build time). Developers can rely on Angular CLI and [ng-packagr](https://github.com/ng-packagr/ng-packagr) (a build tool Angular CLI uses) to produce packages in the Angular package format. See the [Creating Libraries](creating-libraries) guide for more details. File layout ----------- The following example shows a simplified version of the `@angular/core` package's file layout, with an explanation for each file in the package.
```
node_modules/@angular/core
  README.md
  package.json
  index.d.ts
  esm2020
    core.mjs
    index.mjs
    public_api.mjs
    testing
  fesm2015
    core.mjs
    core.mjs.map
    testing.mjs
    testing.mjs.map
  fesm2020
    core.mjs
    core.mjs.map
    testing.mjs
    testing.mjs.map
  testing
    index.d.ts
```
This table describes the file layout under `node_modules/@angular/core` annotated to describe the purpose of files and directories: | Files | Purpose | | --- | --- | | `README.md` | Package README, used by npmjs web UI. | | `package.json` | Primary `package.json`, describing the package itself as well as all available entrypoints and code formats. This file contains the "exports" mapping used by runtimes and tools to perform module resolution. | | `index.d.ts` | Bundled `.d.ts` for the primary entrypoint `@angular/core`. | | `esm2020/` ─ `core.mjs` ─ `index.mjs` ─ `public_api.mjs` | Tree of `@angular/core` sources in unflattened ES2020 format.
| | `esm2020/testing/` | Tree of the `@angular/core/testing` entrypoint in unflattened ES2020 format. | | `fesm2015/` ─ `core.mjs` ─ `core.mjs.map` ─ `testing.mjs` ─ `testing.mjs.map` | Code for all entrypoints in a flattened (FESM) ES2015 format, along with source maps. | | `fesm2020/` ─ `core.mjs` ─ `core.mjs.map` ─ `testing.mjs` ─ `testing.mjs.map` | Code for all entrypoints in flattened (FESM) ES2020 format, along with source maps. | | `testing/` | Directory representing the "testing" entrypoint. | | `testing/index.d.ts` | Bundled `.d.ts` for the `@angular/core/testing` entrypoint. | `package.json` -------------- The primary `package.json` contains important package metadata, including the following: * It [declares](angular-package-format#esm-declaration) the package to be in EcmaScript Module (ESM) format * It contains an [`"exports"` field](angular-package-format#exports) which defines the available source code formats of all entrypoints * It contains [keys](angular-package-format#legacy-resolution-keys) which define the available source code formats of the primary `@angular/core` entrypoint, for tools which do not understand `"exports"`. These keys are considered deprecated, and could be removed as the support for `"exports"` rolls out across the ecosystem. * It declares whether the package contains [side effects](angular-package-format#side-effects) ### ESM declaration The top-level `package.json` contains the key: ``` { "type": "module" } ``` This informs resolvers that code within the package is using EcmaScript Modules as opposed to CommonJS modules. ### `"exports"` The `"exports"` field has the following structure: ``` "exports": { "./schematics/*": { "default": "./schematics/*.js" }, "./package.json": { "default": "./package.json" }, ".": { "types": "./core.d.ts", "esm2020": "./esm2020/core.mjs", "es2020": "./fesm2020/core.mjs", "es2015": "./fesm2015/core.mjs", "node": "./fesm2015/core.mjs", "default": "./fesm2020/core.mjs" }, "./testing": { "types": "./testing/testing.d.ts", "esm2020": "./esm2020/testing/testing.mjs", "es2020": "./fesm2020/testing.mjs", "es2015": "./fesm2015/testing.mjs", "node": "./fesm2015/testing.mjs", "default": "./fesm2020/testing.mjs" } } ``` Of primary interest are the `"."` and the `"./testing"` keys, which define the available code formats for the `@angular/core` primary entrypoint and the `@angular/core/testing` secondary entrypoint, respectively. For each entrypoint, the available formats are: | Formats | Details | | --- | --- | | Typings (`.d.ts` files) | `.d.ts` files are used by TypeScript when depending on a given package. | | `es2020` | ES2020 code flattened into a single source file. | | `es2015` | ES2015 code flattened into a single source file. | | `esm2020` | ES2020 code in unflattened source files (this format is included for experimentation - see [this discussion of defaults](angular-package-format#note-about-the-defaults-in-packagejson) for details). | Tooling that is aware of these keys may preferentially select a desirable code format from `"exports"`. The remaining 2 keys control the default behavior of tooling: * `"node"` selects flattened ES2015 code when the package is loaded in Node. This format is used due to the requirements of `zone.js`, which does not support native `[async](../api/common/asyncpipe)`/`await` ES2017 syntax. Therefore, Node is instructed to use ES2015 code, where `[async](../api/common/asyncpipe)`/`await` structures have been downleveled into Promises. 
* `"default"` selects flattened ES2020 code for all other consumers. Libraries may want to expose additional static files which are not captured by the exports of the JavaScript-based entry-points such as Sass mixins or pre-compiled CSS. For more information, see [Managing assets in a library](creating-libraries#managing-assets-in-a-library). ### Legacy resolution keys In addition to `"exports"`, the top-level `package.json` also defines legacy module resolution keys for resolvers that don't support `"exports"`. For `@angular/core` these are: ``` { "fesm2020": "./fesm2020/core.mjs", "fesm2015": "./fesm2015/core.mjs", "esm2020": "./esm2020/core.mjs", "typings": "./core.d.ts", "module": "./fesm2015/core.mjs", "es2020": "./fesm2020/core.mjs", } ``` As shown in the preceding code snippet, a module resolver can use these keys to load a specific code format. > **NOTE**: Instead of `"default"`, `"module"` selects the format both for Node as well as any tooling not configured to use a specific key. As with `"node"`, ES2015 code is selected due to the constraints of ZoneJS. > > ### Side effects The last function of `package.json` is to declare whether the package has [side effects](angular-package-format#sideeffects-flag). ``` { "sideEffects": false } ``` Most Angular packages should not depend on top-level side effects, and thus should include this declaration. Entrypoints and code splitting ------------------------------ Packages in the Angular Package Format contain one primary entrypoint and zero or more secondary entrypoints (for example, `@angular/common/[http](../api/common/http)`). Entrypoints serve several functions. 1. They define the module specifiers from which users import code (for example, `@angular/core` and `@angular/core/testing`). Users typically perceive these entrypoints as distinct groups of symbols, with different purposes or capability. Specific entrypoints might only be used for special purposes, such as testing. Such APIs can be separated out from the primary entrypoint to reduce the chance of them being used accidentally or incorrectly. 2. They define the granularity at which code can be lazily loaded. Many modern build tools are only capable of "code splitting" (aka lazy loading) at the ES Module level. The Angular Package Format uses primarily a single "flat" ES Module per entry point. This means that most build tooling is not able to split code with a single entry point into multiple output chunks. The general rule for APF packages is to use entrypoints for the smallest sets of logically connected code possible. For example, the Angular Material package publishes each logical component or set of components as a separate entrypoint - one for Button, one for Tabs, etc. This allows each Material component to be lazily loaded separately, if desired. Not all libraries require such granularity. Most libraries with a single logical purpose should be published as a single entrypoint. `@angular/core` for example uses a single entrypoint for the runtime, because the Angular runtime is generally used as a single entity. ### Resolution of secondary entry points Secondary entrypoints can be resolved via the `"exports"` field of the `package.json` for the package. README.md --------- The README file in the Markdown format that is used to display description of a package on npm and GitHub. 
Example README content of @angular/core package: ``` Angular ======= The sources for this package are in the main [Angular](https://github.com/angular/angular) repo. Please file issues and pull requests against that repo. License: MIT ``` Partial compilation ------------------- Libraries in the Angular Package Format must be published in "partial compilation" mode. This is a compilation mode for `ngc` which produces compiled Angular code that is not tied to a specific Angular runtime version, in contrast to the full compilation used for applications, where the Angular compiler and runtime versions must match exactly. To partially compile Angular code, use the `compilationMode` flag in the `angularCompilerOptions` property of your `tsconfig.json`: ``` { … "angularCompilerOptions": { "compilationMode": "partial", } } ``` Partially compiled library code is then converted to fully compiled code during the application build process by the Angular CLI. If your build pipeline does not use the Angular CLI then refer to the [Consuming partial ivy code outside the Angular CLI](creating-libraries#consuming-partial-ivy-code-outside-the-angular-cli) guide. Optimizations ------------- ### Flattening of ES modules The Angular Package Format specifies that code be published in "flattened" ES module format. This significantly reduces the build time of Angular applications as well as download and parse time of the final application bundle. Please check out the excellent post ["The cost of small modules"](https://nolanlawson.com/2016/08/15/the-cost-of-small-modules) by Nolan Lawson. The Angular compiler can generate index ES module files. Tools like Rollup can use these files to generate flattened modules in a *Flattened ES Module* (FESM) file format. FESM is a file format created by flattening all ES Modules accessible from an entrypoint into a single ES Module. It's formed by following all imports from a package and copying that code into a single file while preserving all public ES exports and removing all private imports. The abbreviated name, FESM, pronounced *phe-som*, can be followed by a number such as FESM5 or FESM2015. The number refers to the language level of the JavaScript inside the module. Accordingly, a FESM5 file would be ESM+ES5 and include import/export statements and ES5 source code. To generate a flattened ES Module index file, use the following configuration options in your tsconfig.json file: ``` { "compilerOptions": { … "module": "esnext", "target": "es2020", … }, "angularCompilerOptions": { … "flatModuleOutFile": "my-ui-lib.js", "flatModuleId": "my-ui-lib" } } ``` Once the index file (for example, `my-ui-lib.js`) is generated by `ngc`, bundlers and optimizers like Rollup can be used to produce the flattened ESM file. #### Note about the defaults in package.json As of webpack v4, the ES module flattening optimization should not be necessary for webpack users. It should be possible to get better code-splitting in webpack without flattening the modules. In practice, size regressions can still be seen when using unflattened modules as input for webpack v4. This is why the `module` and `es2020` package.json entries still point to FESM files. This issue is being investigated. The intent is to switch the `module` and `es2020` package.json entry points to unflattened files once the size regression issue is resolved. The APF currently includes unflattened ESM2020 code for the purpose of validating such a future change.
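To make that flattening step concrete, the following is a minimal sketch of a Rollup configuration that could turn the `ngc`-generated flat module index from the example above into a single FESM file. The input and output paths are assumptions for illustration only; in practice, ng-packagr and the Angular CLI perform this step for you.

```
// rollup.config.mjs — a minimal sketch; paths are hypothetical.
export default {
  // The flat module index emitted by ngc (see `flatModuleOutFile` above).
  input: 'out-tsc/my-ui-lib.js',
  output: {
    // A single flattened ES module (FESM) plus its source map.
    file: 'dist/fesm2020/my-ui-lib.mjs',
    format: 'es',
    sourcemap: true,
  },
};
```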
### "sideEffects" flag By default, EcmaScript Modules are side-effectful: importing from a module ensures that any code at the top level of that module should run. This is often undesirable, as most side-effectful code in typical modules is not truly side-effectful, but instead only affects specific symbols. If those symbols are not imported and used, it's often desirable to remove them in an optimization process known as tree-shaking, and the side-effectful code can prevent this. Build tools such as Webpack support a flag which allows packages to declare that they do not depend on side-effectful code at the top level of their modules, giving the tools more freedom to tree-shake code from the package. The end result of these optimizations should be smaller bundle size and better code distribution in bundle chunks after code-splitting. This optimization can break your code if it contains non-local side-effects - this is however not common in Angular applications and it's usually a sign of bad design. The recommendation is for all packages to claim the side-effect free status by setting the `sideEffects` property to `false`, and that developers follow the [Angular Style Guide](styleguide) which naturally results in code without non-local side-effects. More info: [webpack docs on side effects](https://github.com/webpack/webpack/tree/master/examples/side-effects) ### ES2020 language level ES2020 Language level is now the default language level that is consumed by Angular CLI and other tooling. The Angular CLI down-levels the bundle to a language level that is supported by all targeted browsers at application build time. ### d.ts bundling / type definition flattening As of APF v8 it is now preferred to run [API Extractor](https://api-extractor.com), to bundle TypeScript definitions so that the entire API appears in a single file. In prior APF versions each entry point would have a `src` directory next to the .d.ts entry point and this directory contained individual d.ts files matching the structure of the original source code. While this distribution format is still allowed and supported, it is highly discouraged because it confuses tools like IDEs that then offer incorrect autocompletion, and allows users to depend on deep-import paths which are typically not considered to be public API of a library or a package. ### Tslib As of APF v10, it is recommended to add tslib as a direct dependency of your primary entry-point. This is because the tslib version is tied to the TypeScript version used to compile your library. Examples -------- * [@angular/core package](https://unpkg.com/browse/@angular/[email protected]) * [@angular/material package](https://unpkg.com/browse/@angular/[email protected]) Definition of terms ------------------- The following terms are used throughout this document intentionally. In this section are the definitions of all of them to provide additional clarity. #### Package The smallest set of files that are published to NPM and installed together, for example `@angular/core`. This package includes a manifest called package.json, compiled source code, typescript definition files, source maps, metadata, etc. The package is installed with `npm install @angular/core`. #### Symbol A class, function, constant, or variable contained in a module and optionally made visible to the external world via a module export. #### Module Short for ECMAScript Modules. A file containing statements that import and export symbols. 
This is identical to the definition of modules in the ECMAScript spec. #### ESM Short for ECMAScript Modules (see above). #### FESM Short for Flattened ES Modules and consists of a file format created by flattening all ES Modules accessible from an entry point into a single ES Module. #### Module ID The identifier of a module used in the import statements (for example, `@angular/core`). The ID often maps directly to a path on the filesystem, but this is not always the case due to various module resolution strategies. #### Module specifier A module identifier (see above). #### Module resolution strategy Algorithm used to convert Module IDs to paths on the filesystem. Node.js has one that is well specified and widely used, TypeScript supports several module resolution strategies, [Closure Compiler](https://developers.google.com/closure/compiler) has yet another strategy. #### Module format Specification of the module syntax that covers at minimum the syntax for the importing and exporting from a file. Common module formats are CommonJS (CJS, typically used for Node.js applications) or ECMAScript Modules (ESM). The module format indicates only the packaging of the individual modules, but not the JavaScript language features used to make up the module content. Because of this, the Angular team often uses the language level specifier as a suffix to the module format, (for example, ESM+ES2015 specifies that the module is in ESM format and contains code down-leveled to ES2015). #### Bundle An artifact in the form of a single JS file, produced by a build tool (for example, [Webpack](https://webpack.js.org) or [Rollup](https://rollupjs.org)) that contains symbols originating in one or more modules. Bundles are a browser-specific workaround that reduce network strain that would be caused if browsers were to start downloading hundreds if not tens of thousands of files. Node.js typically doesn't use bundles. Common bundle formats are UMD and System.register. #### Language level The language of the code (ES2015 or ES2020). Independent of the module format. #### Entry point A module intended to be imported by the user. It is referenced by a unique module ID and exports the public API referenced by that module ID. An example is `@angular/core` or `@angular/core/testing`. Both entry points exist in the `@angular/core` package, but they export different symbols. A package can have many entry points. #### Deep import A process of retrieving symbols from modules that are not Entry Points. These module IDs are usually considered to be private APIs that can change over the lifetime of the project or while the bundle for the given package is being created. #### Top-Level import An import coming from an entry point. The available top-level imports are what define the public API and are exposed in "@angular/name" modules, such as `@angular/core` or `@angular/common`. #### Tree-shaking The process of identifying and removing code not used by an application - also known as dead code elimination. This is a global optimization performed at the application level using tools like [Rollup](https://rollupjs.org), [Closure Compiler](https://developers.google.com/closure/compiler), or [Terser](https://github.com/terser/terser). #### AOT compiler The Ahead of Time Compiler for Angular. #### Flattened type definitions The bundled TypeScript definitions generated from [API Extractor](https://api-extractor.com). Last reviewed on Mon Feb 28 2022
angular Prerendering static pages Prerendering static pages ========================= Angular Universal lets you prerender the pages of your application. Prerendering is the process where a dynamic page is processed at build time generating static HTML. How to prerender a page ----------------------- To prerender a static page make sure to add Server-Side Rendering (SSR) capabilities to your application. For more information see the [universal guide](universal). Once SSR is added, run the following command: ``` npm run prerender ``` ### Build options for prerendering When you add prerendering to your application, the following build options are available: | Options | Details | | --- | --- | | `browserTarget` | Specify the target to build. | | `serverTarget` | Specify the Server target to use for prerendering the application. | | `routes` | Define an array of extra routes to prerender. | | `guessRoutes` | Whether builder should extract routes and guess which paths to render. Defaults to `true`. | | `routesFile` | Specify a file that contains a list of all routes to prerender, separated by newlines. This option is useful if you have a large number of routes. | | `numProcesses` | Specify the number of CPUs to be used while running the prerendering command. | ### Prerendering dynamic routes You can prerender dynamic routes. An example of a dynamic route is `product/:id`, where `id` is dynamically provided. To prerender dynamic routes, choose one from the following options: * Provide extra routes in the command line * Provide routes using a file * Prerender specific routes #### Provide extra routes in the command line While running the prerender command, you can provide extra routes. For example: ``` ng run <app-name>:prerender --routes /product/1 /product/2 ``` #### Providing extra routes using a file You can provide routes using a file to create static pages. This method is useful if you have a large number of routes to create. For example, product details for an e-commerce application, which might come from an external source, like a Database or Content Management System (CMS). To provide routes using a file, use the `--routes-file` option with the name of a `.txt` file containing the routes. For example, you could create this file by using a script to extract IDs from a database and save them to a `routes.txt` file: ``` /products/1 /products/555 ``` When your `.txt` file is ready, run the following command to prerender the static files with dynamic values: ``` ng run <app-name>:prerender --routes-file routes.txt ``` #### Prerendering specific routes You can also pass specific routes to the prerender command. If you choose this option, make sure to turn off the `guessRoutes` option. ``` ng run <app-name>:prerender --no-guess-routes --routes /product/1 /product/1 ``` Last reviewed on Mon Feb 28 2022 angular Common Routing Tasks Common Routing Tasks ==================== This topic describes how to implement many of the common tasks associated with adding the Angular router to your application. Generate an application with routing enabled -------------------------------------------- The following command uses the Angular CLI to generate a basic Angular application with an application routing module, called `AppRoutingModule`, which is an NgModule where you can configure your routes. The application name in the following example is `routing-app`. 
``` ng new routing-app --routing --defaults ``` ### Adding components for routing To use the Angular router, an application needs to have at least two components so that it can navigate from one to the other. To create a component using the CLI, enter the following at the command line where `first` is the name of your component: ``` ng generate component first ``` Repeat this step for a second component but give it a different name. Here, the new name is `second`. ``` ng generate component second ``` The CLI automatically appends `[Component](../api/core/component)`, so if you were to write `first-component`, your component would be `FirstComponentComponent`. > This guide works with a CLI-generated Angular application. If you are working manually, make sure that you have `<base href="/">` in the `<head>` of your index.html file. This assumes that the `app` folder is the application root, and uses `"/"`. > > ### Importing your new components To use your new components, import them into `AppRoutingModule` at the top of the file, as follows: ``` import { FirstComponent } from './first/first.component'; import { SecondComponent } from './second/second.component'; ``` Defining a basic route ---------------------- There are three fundamental building blocks to creating a route. Import the `AppRoutingModule` into `AppModule` and add it to the `imports` array. The Angular CLI performs this step for you. However, if you are creating an application manually or working with an existing, non-CLI application, verify that the imports and configuration are correct. The following is the default `AppModule` using the CLI with the `--routing` flag. ``` import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppRoutingModule } from './app-routing.module'; // CLI imports AppRoutingModule import { AppComponent } from './app.component'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, AppRoutingModule // CLI adds AppRoutingModule to the AppModule's imports array ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } ``` 1. Import `[RouterModule](../api/router/routermodule)` and `[Routes](../api/router/routes)` into your routing module. The Angular CLI performs this step automatically. The CLI also sets up a `[Routes](../api/router/routes)` array for your routes and configures the `imports` and `exports` arrays for `@[NgModule](../api/core/ngmodule)()`. ``` import { NgModule } from '@angular/core'; import { Routes, RouterModule } from '@angular/router'; // CLI imports router const routes: Routes = []; // sets up routes constant where you define your routes // configures NgModule imports and exports @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { } ``` 2. Define your routes in your `[Routes](../api/router/routes)` array. Each route in this array is a JavaScript object that contains two properties. The first property, `path`, defines the URL path for the route. The second property, `component`, defines the component Angular should use for the corresponding path. ``` const routes: Routes = [ { path: 'first-component', component: FirstComponent }, { path: 'second-component', component: SecondComponent }, ]; ``` 3. Add your routes to your application. Now that you have defined your routes, add them to your application. First, add links to the two components. 
Assign the anchor tag that you want to add the route to the `[routerLink](../api/router/routerlink)` attribute. Set the value of the attribute to the component to show when a user clicks on each link. Next, update your component template to include `<[router-outlet](../api/router/routeroutlet)>`. This element informs Angular to update the application view with the component for the selected route. ``` <h1>Angular Router App</h1> <!-- This nav gives you links to click, which tells the router which route to use (defined in the routes constant in AppRoutingModule) --> <nav> <ul> <li><a routerLink="/first-component" routerLinkActive="active" ariaCurrentWhenActive="page">First Component</a></li> <li><a routerLink="/second-component" routerLinkActive="active" ariaCurrentWhenActive="page">Second Component</a></li> </ul> </nav> <!-- The routed views render in the <router-outlet>--> <router-outlet></router-outlet> ``` ### Route order The order of routes is important because the `[Router](../api/router/router)` uses a first-match wins strategy when matching routes, so more specific routes should be placed above less specific routes. List routes with a static path first, followed by an empty path route, which matches the default route. The [wildcard route](router#setting-up-wildcard-routes) comes last because it matches every URL and the `[Router](../api/router/router)` selects it only if no other routes match first. Getting route information ------------------------- Often, as a user navigates your application, you want to pass information from one component to another. For example, consider an application that displays a shopping list of grocery items. Each item in the list has a unique `id`. To edit an item, users click an Edit button, which opens an `EditGroceryItem` component. You want that component to retrieve the `id` for the grocery item so it can display the right information to the user. Use a route to pass this type of information to your application components. To do so, you use the [ActivatedRoute](../api/router/activatedroute) interface. To get information from a route: 1. Import `[ActivatedRoute](../api/router/activatedroute)` and `[ParamMap](../api/router/parammap)` to your component. ``` import { Router, ActivatedRoute, ParamMap } from '@angular/router'; ``` These `import` statements add several important elements that your component needs. To learn more about each, see the following API pages: * [`Router`](../api/router) * [`ActivatedRoute`](../api/router/activatedroute) * [`ParamMap`](../api/router/parammap) 2. Inject an instance of `[ActivatedRoute](../api/router/activatedroute)` by adding it to your application's constructor: ``` constructor( private route: ActivatedRoute, ) {} ``` 3. Update the `ngOnInit()` method to access the `[ActivatedRoute](../api/router/activatedroute)` and track the `name` parameter: ``` ngOnInit() { this.route.queryParams.subscribe(params => { this.name = params['name']; }); } ``` > **NOTE**: The preceding example uses a variable, `name`, and assigns it the value based on the `name` parameter. > > Setting up wildcard routes -------------------------- A well-functioning application should gracefully handle when users attempt to navigate to a part of your application that does not exist. To add this functionality to your application, you set up a wildcard route. The Angular router selects this route any time the requested URL doesn't match any router paths. To set up a wildcard route, add the following code to your `routes` definition. 
``` { path: '**', component: <component-name> } ``` The two asterisks, `**`, indicate to Angular that this `routes` definition is a wildcard route. For the component property, you can define any component in your application. Common choices include an application-specific `PageNotFoundComponent`, which you can define to [display a 404 page](router#404-page-how-to) to your users; or a redirect to your application's main component. A wildcard route is the last route because it matches any URL. For more detail on why order matters for routes, see [Route order](router#route-order). Displaying a 404 page --------------------- To display a 404 page, set up a [wildcard route](router#wildcard-route-how-to) with the `component` property set to the component you'd like to use for your 404 page as follows: ``` const routes: Routes = [ { path: 'first-component', component: FirstComponent }, { path: 'second-component', component: SecondComponent }, { path: '**', component: PageNotFoundComponent }, // Wildcard route for a 404 page ]; ``` The last route with the `path` of `**` is a wildcard route. The router selects this route if the requested URL doesn't match any of the paths earlier in the list and sends the user to the `PageNotFoundComponent`. Setting up redirects -------------------- To set up a redirect, configure a route with the `path` you want to redirect from, the `component` you want to redirect to, and a `pathMatch` value that tells the router how to match the URL. ``` const routes: Routes = [ { path: 'first-component', component: FirstComponent }, { path: 'second-component', component: SecondComponent }, { path: '', redirectTo: '/first-component', pathMatch: 'full' }, // redirect to `first-component` { path: '**', component: PageNotFoundComponent }, // Wildcard route for a 404 page ]; ``` In this example, the third route is a redirect so that the router defaults to the `first-component` route. Notice that this redirect precedes the wildcard route. Here, `path: ''` means to use the initial relative URL (`''`). For more details on `pathMatch` see [Spotlight on `pathMatch`](router-tutorial-toh#pathmatch). Nesting routes -------------- As your application grows more complex, you might want to create routes that are relative to a component other than your root component. These types of nested routes are called child routes. This means you're adding a second `<[router-outlet](../api/router/routeroutlet)>` to your app, because it is in addition to the `<[router-outlet](../api/router/routeroutlet)>` in `AppComponent`. In this example, there are two additional child components, `child-a`, and `child-b`. Here, `FirstComponent` has its own `<nav>` and a second `<[router-outlet](../api/router/routeroutlet)>` in addition to the one in `AppComponent`. ``` <h2>First Component</h2> <nav> <ul> <li><a routerLink="child-a">Child A</a></li> <li><a routerLink="child-b">Child B</a></li> </ul> </nav> <router-outlet></router-outlet> ``` A child route is like any other route, in that it needs both a `path` and a `component`. The one difference is that you place child routes in a `children` array within the parent route. 
``` const routes: Routes = [ { path: 'first-component', component: FirstComponent, // this is the component with the <router-outlet> in the template children: [ { path: 'child-a', // child route path component: ChildAComponent, // child route component that the router renders }, { path: 'child-b', component: ChildBComponent, // another child route component that the router renders }, ], }, ]; ``` Setting the page title ---------------------- Each page in your application should have a unique title so that they can be identified in the browser history. The `[Router](../api/router/router)` sets the document's title using the `title` property from the `[Route](../api/router/route)` config. ``` const routes: Routes = [ { path: 'first-component', title: 'First component', component: FirstComponent, // this is the component with the <router-outlet> in the template children: [ { path: 'child-a', // child route path title: resolvedChildATitle, component: ChildAComponent, // child route component that the router renders }, { path: 'child-b', title: 'child b', component: ChildBComponent, // another child route component that the router renders }, ], }, ]; const resolvedChildATitle: ResolveFn<string> = () => Promise.resolve('child a'); ``` > **NOTE**: The `title` property follows the same rules as static route `data` and dynamic values that implement `[ResolveFn](../api/router/resolvefn)`. > > You can also provide a custom title strategy by extending the `[TitleStrategy](../api/router/titlestrategy)`. ``` @Injectable({providedIn: 'root'}) export class TemplatePageTitleStrategy extends TitleStrategy { constructor(private readonly title: Title) { super(); } override updateTitle(routerState: RouterStateSnapshot) { const title = this.buildTitle(routerState); if (title !== undefined) { this.title.setTitle(`My Application | ${title}`); } } } @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule], providers: [ {provide: TitleStrategy, useClass: TemplatePageTitleStrategy}, ] }) export class AppRoutingModule { } ``` Using relative paths -------------------- Relative paths let you define paths that are relative to the current URL segment. The following example shows a relative route to another component, `second-component`. `FirstComponent` and `SecondComponent` are at the same level in the tree, however, the link to `SecondComponent` is situated within the `FirstComponent`, meaning that the router has to go up a level and then into the second directory to find the `SecondComponent`. Rather than writing out the whole path to get to `SecondComponent`, use the `../` notation to go up a level. ``` <h2>First Component</h2> <nav> <ul> <li><a routerLink="../second-component">Relative Route to second component</a></li> </ul> </nav> <router-outlet></router-outlet> ``` In addition to `../`, use `./` or no leading slash to specify the current level. ### Specifying a relative route To specify a relative route, use the `[NavigationExtras](../api/router/navigationextras)` `relativeTo` property. In the component class, import `[NavigationExtras](../api/router/navigationextras)` from the `@angular/router`. Then use `relativeTo` in your navigation method. After the link parameters array, which here contains `items`, add an object with the `relativeTo` property set to the `[ActivatedRoute](../api/router/activatedroute)`, which is `this.route`. 
``` goToItems() { this.router.navigate(['items'], { relativeTo: this.route }); } ``` The `goToItems()` method interprets the destination URI as relative to the activated route and navigates to the `items` route. Accessing query parameters and fragments ---------------------------------------- Sometimes, a feature of your application requires accessing a part of a route, such as a query parameter or a fragment. The Tour of Heroes application at this stage in the tutorial uses a list view in which you can click on a hero to see details. The router uses an `id` to show the correct hero's details. First, import the following members in the component you want to navigate from. ``` import { ActivatedRoute } from '@angular/router'; import { Observable } from 'rxjs'; import { switchMap } from 'rxjs/operators'; ``` Next, inject the activated route service: ``` constructor(private route: ActivatedRoute) {} ``` Configure the class so that you have an observable, `heroes$`, a `selectedId` to hold the `id` number of the hero, and the heroes. Then, in `ngOnInit()`, add the following code to get the `id` of the selected hero. This code snippet assumes that you have a heroes list, a hero service, a function to get your heroes, and the HTML to render your list and details, just as in the Tour of Heroes example. ``` heroes$: Observable<Hero[]>; selectedId: number; heroes = HEROES; ngOnInit() { this.heroes$ = this.route.paramMap.pipe( switchMap(params => { this.selectedId = Number(params.get('id')); return this.service.getHeroes(); }) ); } ``` Next, in the component that you want to navigate to, import the following members. ``` import { Router, ActivatedRoute, ParamMap } from '@angular/router'; import { Observable } from 'rxjs'; ``` Inject `[ActivatedRoute](../api/router/activatedroute)` and `[Router](../api/router/router)` in the constructor of the component class so they are available to this component:
```
hero$: Observable<Hero>;

constructor(
  private route: ActivatedRoute,
  private router: Router
) {}

ngOnInit() {
  const heroId = this.route.snapshot.paramMap.get('id');
  this.hero$ = this.service.getHero(heroId);
}

gotoItems(hero: Hero) {
  const heroId = hero ? hero.id : null;
  // Pass along the hero id if available
  // so that the HeroList component can select that item.
  this.router.navigate(['/heroes', { id: heroId }]);
}
```
Lazy loading ------------ You can configure your routes to lazy load modules, which means that Angular only loads modules as needed, rather than loading all modules when the application launches. Additionally, preload parts of your application in the background to improve the user experience. For more information on lazy loading and preloading, see the dedicated guide [Lazy loading NgModules](lazy-loading-ngmodules). Preventing unauthorized access ------------------------------ Use route guards to prevent users from navigating to parts of an application without authorization. The following route guards are available in Angular: * [`canActivate`](../api/router/canactivatefn) * [`canActivateChild`](../api/router/canactivatechildfn) * [`canDeactivate`](../api/router/candeactivatefn) * [`canMatch`](../api/router/canmatchfn) * [`resolve`](../api/router/resolvefn) * [`canLoad`](../api/router/canloadfn) To use route guards, consider using [component-less routes](../api/router/route#componentless-routes) as this facilitates guarding child routes. Create a function for your guard. In your guard function, implement the guard you want to use. The following example uses `canActivate` to guard the route.
``` export const yourGuard: CanActivateFn = ( next: ActivatedRouteSnapshot, state: RouterStateSnapshot) => { // your logic goes here } ``` In your routing module, use the appropriate property in your `routes` configuration. Here, `canActivate` tells the router to mediate navigation to this particular route. ``` { path: '/your-path', component: YourComponent, canActivate: [yourGuard], } ``` For more information with a working example, see the [routing tutorial section on route guards](router-tutorial-toh#milestone-5-route-guards). Link parameters array --------------------- A link parameters array holds the following ingredients for router navigation: * The path of the route to the destination component * Required and optional route parameters that go into the route URL Bind the `[RouterLink](../api/router/routerlink)` directive to such an array like this: ``` <a [routerLink]="['/heroes']">Heroes</a> ``` The following is a two-element array when specifying a route parameter: ``` <a [routerLink]="['/hero', hero.id]"> <span class="badge">{{ hero.id }}</span>{{ hero.name }} </a> ``` Provide optional route parameters in an object, as in `{ foo: 'foo' }`: ``` <a [routerLink]="['/crisis-center', { foo: 'foo' }]">Crisis Center</a> ``` These three examples cover the needs of an application with one level of routing. However, with a child router, such as in the crisis center, you create new link array possibilities. The following minimal `[RouterLink](../api/router/routerlink)` example builds upon a specified [default child route](router-tutorial-toh#a-crisis-center-with-child-routes) for the crisis center. ``` <a [routerLink]="['/crisis-center']">Crisis Center</a> ``` Review the following: * The first item in the array identifies the parent route (`/crisis-center`) * There are no parameters for this parent route * There is no default for the child route so you need to pick one * You're navigating to the `CrisisListComponent`, whose route path is `/`, but you don't need to explicitly add the slash Consider the following router link that navigates from the root of the application down to the Dragon Crisis: ``` <a [routerLink]="['/crisis-center', 1]">Dragon Crisis</a> ``` * The first item in the array identifies the parent route (`/crisis-center`) * There are no parameters for this parent route * The second item identifies the child route details about a particular crisis (`/:id`) * The details child route requires an `id` route parameter * You added the `id` of the Dragon Crisis as the second item in the array (`1`) * The resulting path is `/crisis-center/1` You could also redefine the `AppComponent` template with Crisis Center routes exclusively: ``` template: ` <h1 class="title">Angular Router</h1> <nav> <a [routerLink]="['/crisis-center']">Crisis Center</a> <a [routerLink]="['/crisis-center/1', { foo: 'foo' }]">Dragon Crisis</a> <a [routerLink]="['/crisis-center/2']">Shark Crisis</a> </nav> <router-outlet></router-outlet> ` ``` In summary, you can write applications with one, two or more levels of routing. The link parameters array affords the flexibility to represent any routing depth and any legal sequence of route paths, (required) router parameters, and (optional) route parameter objects. `[LocationStrategy](../api/common/locationstrategy)` and browser URL styles ---------------------------------------------------------------------------- When the router navigates to a new component view, it updates the browser's location and history with a URL for that view. 
Modern HTML5 browsers support [history.pushState](https://developer.mozilla.org/docs/Web/API/History_API/Working_with_the_History_API#adding_and_modifying_history_entries "HTML5 browser history push-state"), a technique that changes a browser's location and history without triggering a server page request. The router can compose a "natural" URL that is indistinguishable from one that would otherwise require a page load. Here's the Crisis Center URL in this "HTML5 pushState" style: ``` localhost:3002/crisis-center ``` Older browsers send page requests to the server when the location URL changes unless the change occurs after a "#" (called the "hash"). Routers can take advantage of this exception by composing in-application route URLs with hashes. Here's a "hash URL" that routes to the Crisis Center. ``` localhost:3002/src/#/crisis-center ``` The router supports both styles with two `[LocationStrategy](../api/common/locationstrategy)` providers: | Providers | Details | | --- | --- | | `[PathLocationStrategy](../api/common/pathlocationstrategy)` | The default "HTML5 pushState" style. | | `[HashLocationStrategy](../api/common/hashlocationstrategy)` | The "hash URL" style. | The `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` function sets the `[LocationStrategy](../api/common/locationstrategy)` to the `[PathLocationStrategy](../api/common/pathlocationstrategy)`, which makes it the default strategy. You also have the option of switching to the `[HashLocationStrategy](../api/common/hashlocationstrategy)` with an override during the bootstrapping process. > For more information on providers and the bootstrap process, see [Dependency Injection](dependency-injection-providers). > > Choosing a routing strategy --------------------------- You must choose a routing strategy early in the development of your project because once the application is in production, visitors to your site use and depend on application URL references. Almost all Angular projects should use the default HTML5 style. It produces URLs that are easier for users to understand and it preserves the option to do server-side rendering. Rendering critical pages on the server is a technique that can greatly improve perceived responsiveness when the application first loads. An application that would otherwise take ten or more seconds to start could be rendered on the server and delivered to the user's device in less than a second. This option is only available if application URLs look like normal web URLs without hash (`#`) characters in the middle. `<base href>` ------------- The router uses the browser's [history.pushState](https://developer.mozilla.org/docs/Web/API/History_API/Working_with_the_History_API#adding_and_modifying_history_entries "HTML5 browser history push-state") for navigation. `pushState` lets you customize in-application URL paths; for example, `localhost:4200/crisis-center`. The in-application URLs can be indistinguishable from server URLs. Modern HTML5 browsers were the first to support `pushState` which is why many people refer to these URLs as "HTML5 style" URLs. > HTML5 style navigation is the router default. In the [LocationStrategy and browser URL styles](router#browser-url-styles) section, learn why HTML5 style is preferable, how to adjust its behavior, and how to switch to the older hash (`#`) style, if necessary. > > You must add a [`<base href>` element](https://developer.mozilla.org/docs/Web/HTML/Element/base "base href") to the application's `index.html` for `pushState` routing to work. 
The browser uses the `<base href>` value to prefix relative URLs when referencing CSS files, scripts, and images. Add the `<base>` element just after the `<head>` tag. If the `app` folder is the application root, as it is for this application, set the `href` value in `index.html` as shown here. ``` <base href="/"> ``` ### HTML5 URLs and the `<base href>` The guidelines that follow will refer to different parts of a URL. This diagram outlines what those parts refer to: ``` foo://example.com:8042/over/there?name=ferret#nose \_/ \______________/\_________/ \_________/ \__/ | | | | | scheme authority path query fragment ``` While the router uses the [HTML5 pushState](https://developer.mozilla.org/docs/Web/API/History_API#Adding_and_modifying_history_entries "Browser history push-state") style by default, you must configure that strategy with a `<base href>`. The preferred way to configure the strategy is to add a [`<base href>` element](https://developer.mozilla.org/docs/Web/HTML/Element/base "base href") tag in the `<head>` of the `index.html`. ``` <base href="/"> ``` Without that tag, the browser might not be able to load resources (images, CSS, scripts) when "deep linking" into the application. Some developers might not be able to add the `<base>` element, perhaps because they don't have access to `<head>` or the `index.html`. Those developers can still use HTML5 URLs by taking the following two steps: 1. Provide the router with an appropriate `[APP\_BASE\_HREF](../api/common/app_base_href)` value. 2. Use root URLs (URLs with an `authority`) for all web resources: CSS, images, scripts, and template HTML files. * The `<base href>` `path` should end with a "/", as browsers ignore characters in the `path` that follow the right-most "`/`" * If the `<base href>` includes a `[query](../api/animations/query)` part, the `[query](../api/animations/query)` is only used if the `path` of a link in the page is empty and has no `[query](../api/animations/query)`. This means that a `[query](../api/animations/query)` in the `<base href>` is only included when using `[HashLocationStrategy](../api/common/hashlocationstrategy)`. * If a link in the page is a root URL (has an `authority`), the `<base href>` is not used. In this way, an `[APP\_BASE\_HREF](../api/common/app_base_href)` with an authority will cause all links created by Angular to ignore the `<base href>` value. * A fragment in the `<base href>` is *never* persisted For more complete information on how `<base href>` is used to construct target URIs, see the [RFC](https://tools.ietf.org/html/rfc3986#section-5.2.2) section on transforming references. ### `[HashLocationStrategy](../api/common/hashlocationstrategy)` Use `[HashLocationStrategy](../api/common/hashlocationstrategy)` by providing the `useHash: true` in an object as the second argument of the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` in the `AppModule`. 
```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { Routes, RouterModule } from '@angular/router';

import { AppComponent } from './app.component';
import { PageNotFoundComponent } from './page-not-found/page-not-found.component';

const routes: Routes = [ ];

@NgModule({
  imports: [
    BrowserModule,
    FormsModule,
    RouterModule.forRoot(routes, { useHash: true })  // .../#/crisis-center/
  ],
  declarations: [
    AppComponent,
    PageNotFoundComponent
  ],
  providers: [ ],
  bootstrap: [ AppComponent ]
})
export class AppModule { }
```
Last reviewed on Mon Feb 28 2022
angular NgModule API NgModule API ============ At a high level, NgModules are a way to organize Angular applications and they accomplish this through the metadata in the `@[NgModule](../api/core/ngmodule)` decorator. The metadata falls into three categories: | Category | Details | | --- | --- | | Static | Compiler configuration which tells the compiler about directive selectors and where in templates the directives should be applied through selector matching. This is configured using the `declarations` array. | | Runtime | Injector configuration using the `providers` array. | | Composability / Grouping | Bringing NgModules together and making them available using the `imports` and `exports` arrays. | ``` @NgModule({ // Static, that is compiler configuration declarations: [], // Configure the selectors // Runtime, or injector configuration providers: [], // Runtime injector configuration // Composability / Grouping imports: [], // composing NgModules together exports: [] // making NgModules available to other parts of the app }) ``` `@[NgModule](../api/core/ngmodule)` metadata --------------------------------------------- The following table summarizes the `@[NgModule](../api/core/ngmodule)` metadata properties. | Property | Details | | --- | --- | | `declarations` | A list of [declarable](ngmodule-faq#q-declarable) classes (*components*, *directives*, and *pipes*) that *belong to this module*. 1. When compiling a template, you need to determine a set of selectors which should be used for triggering their corresponding directives. 2. The template is compiled within the context of an NgModule —the NgModule within which the template's component is declared— which determines the set of selectors using the following rules: * All selectors of directives listed in `declarations`. * All selectors of directives exported from imported NgModules. Components, directives, and pipes must belong to *exactly* one module. The compiler emits an error if you try to declare the same class in more than one module. Be careful not to re-declare a class that is imported directly or indirectly from another module. | | `providers` | A list of dependency-injection providers. Angular registers these providers with the NgModule's injector. If it is the NgModule used for bootstrapping then it is the root injector. These services become available for injection into any component, directive, pipe or service which is a child of this injector. A lazy-loaded module has its own injector which is typically a child of the application root injector. Lazy-loaded services are scoped to the lazy module's injector. If a lazy-loaded module also provides the `UserService`, any component created within that module's context (such as by router navigation) gets the local instance of the service, not the instance in the root application injector. Components in external modules continue to receive the instance provided by their injectors. For more information on injector hierarchy and scoping, see [Providers](providers) and the [DI Guide](dependency-injection). | | `imports` | A list of modules which should be folded into this module. Folded means it is as if all the imported NgModule's exported properties were declared here. Specifically, it is as if the list of modules whose exported components, directives, or pipes are referenced by the component templates were declared in this module. 
A component template can [reference](ngmodule-faq#q-template-reference) another component, directive, or pipe when the reference is declared in this module or if the imported module has exported it. For example, a component can use the `[NgIf](../api/common/ngif)` and `[NgFor](../api/common/ngfor)` directives only if the module has imported the Angular `[CommonModule](../api/common/commonmodule)` (perhaps indirectly by importing `[BrowserModule](../api/platform-browser/browsermodule)`). You can import many standard directives from the `[CommonModule](../api/common/commonmodule)` but some familiar directives belong to other modules. For example, you can use `[([ngModel](../api/forms/ngmodel))]` only after importing the Angular `[FormsModule](../api/forms/formsmodule)`. | | `exports` | A list of declarations —*component*, *directive*, and *pipe* classes— that an importing module can use. Exported declarations are the module's *public API*. A component in another module can [use](ngmodule-faq#q-template-reference) *this* module's `UserComponent` if it imports this module and this module exports `UserComponent`. Declarations are private by default. If this module does *not* export `UserComponent`, then only the components within *this* module can use `UserComponent`. Importing a module does *not* automatically re-export the imported module's imports. Module 'B' can't use `[ngIf](../api/common/ngif)` just because it imported module 'A' which imported `[CommonModule](../api/common/commonmodule)`. Module 'B' must import `[CommonModule](../api/common/commonmodule)` itself. A module can list another module among its `exports`, in which case all of that module's public components, directives, and pipes are exported. [Re-export](ngmodule-faq#q-reexport) makes module transitivity explicit. If Module 'A' re-exports `[CommonModule](../api/common/commonmodule)` and Module 'B' imports Module 'A', Module 'B' components can use `[ngIf](../api/common/ngif)` even though 'B' itself didn't import `[CommonModule](../api/common/commonmodule)`. | | `bootstrap` | A list of components that are automatically bootstrapped. Usually there's only one component in this list, the *root component* of the application. Angular can launch with multiple bootstrap components, each with its own location in the host web page. | More on NgModules ----------------- You may also be interested in the following: * [Feature Modules](feature-modules) * [Entry Components](entry-components) * [Providers](providers) * [Types of Feature Modules](module-types) Last reviewed on Mon Feb 28 2022 angular TypeScript configuration TypeScript configuration ======================== TypeScript is a primary language for Angular application development. It is a superset of JavaScript with design-time support for type safety and tooling. Browsers can't execute TypeScript directly. TypeScript must be "transpiled" into JavaScript using the *tsc* compiler, which requires some configuration. This page covers some aspects of TypeScript configuration and the TypeScript environment that are important to Angular developers, including details about the following files: | Files | Details | | --- | --- | | [tsconfig.json](typescript-configuration#tsconfig) | TypeScript compiler configuration. | | [typings](typescript-configuration#typings) | TypeScript declaration files. | Configuration files ------------------- A given Angular workspace contains several TypeScript configuration files.
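One of those files, sketched below under the assumption of a default Angular CLI layout, is the application-level `tsconfig.app.json`; its exact contents vary between CLI versions, so treat the sketch as illustrative rather than authoritative.

```
/* tsconfig.app.json (sketch): a project-level file that inherits the
   workspace-wide options from the root tsconfig.json via "extends". */
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "outDir": "./out-tsc/app",
    "types": []
  },
  "files": ["src/main.ts"],
  "include": ["src/**/*.d.ts"]
}
```

The root `tsconfig.json` that it extends is described next, and both `tsconfig.app.json` and `tsconfig.spec.json` come up again in the section on installable typings files.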
At the root of the workspace, the `tsconfig.json` file specifies the base TypeScript and Angular compiler options that all projects in the workspace inherit. > See the [Angular compiler options](angular-compiler-options) guide for information about what Angular specific options are available. > > TypeScript and Angular have a wide range of options which can be used to configure type-checking features and generated output. For more information, see the [Configuration inheritance with extends](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html#configuration-inheritance-with-extends) section of the TypeScript documentation. > For more information about TypeScript configuration files, see the official [TypeScript handbook](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html). For details about configuration inheritance, see the [Configuration inheritance with extends](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html#configuration-inheritance-with-extends) section. > > The initial `tsconfig.json` for an Angular workspace typically looks like the following example. ``` /* To learn more about this file see: https://angular.io/config/tsconfig. */ { "compileOnSave": false, "compilerOptions": { "baseUrl": "./", "forceConsistentCasingInFileNames": true, "strict": true, "noImplicitOverride": true, "noPropertyAccessFromIndexSignature": true, "noImplicitReturns": true, "noFallthroughCasesInSwitch": true, "sourceMap": true, "declaration": false, "downlevelIteration": true, "experimentalDecorators": true, "moduleResolution": "node", "importHelpers": true, "target": "ES2022", "module": "ES2022", "useDefineForClassFields": false, "lib": [ "ES2022", "dom" ] }, "angularCompilerOptions": { "enableI18nLegacyMessageIdFormat": false, "strictInjectionParameters": true, "strictInputAccessModifiers": true, "strictTemplates": true } } ``` ### `noImplicitAny` and `suppressImplicitAnyIndexErrors` TypeScript developers disagree about whether the `noImplicitAny` flag should be `true` or `false`. There is no correct answer and you can change the flag later. But your choice now can make a difference in larger projects, so it merits discussion. When the `noImplicitAny` flag is `false` (the default), and if the compiler cannot infer the variable type based on how it's used, the compiler silently defaults the type to `any`. That's what is meant by *implicit `any`*. When the `noImplicitAny` flag is `true` and the TypeScript compiler cannot infer the type, it still generates the JavaScript files, but it also **reports an error**. Many seasoned developers prefer this stricter setting because type checking catches more unintentional errors at compile time. You can set a variable's type to `any` even when the `noImplicitAny` flag is `true`. When the `noImplicitAny` flag is `true`, you may get *implicit index errors* as well. Most developers feel that *this particular error* is more annoying than helpful. You can suppress them with the following additional flag: ``` "suppressImplicitAnyIndexErrors": true ``` > For more information about how the TypeScript configuration affects compilation, see [Angular Compiler Options](angular-compiler-options) and [Template Type Checking](template-typecheck). > > TypeScript typings ------------------ Many JavaScript libraries, such as jQuery, the Jasmine testing library, and Angular, extend the JavaScript environment with features and syntax that the TypeScript compiler doesn't recognize natively. When the compiler doesn't recognize something, it reports an error.
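As a concrete illustration, suppose a `<script>` tag loads a library that attaches a global function the compiler has never heard of (the `legacyChart` name below is invented for this example). The call works at runtime, but the compiler reports error TS2304 until an ambient declaration tells it the name exists:

```
// src/app/sales.ts (sketch): 'legacyChart' is provided at runtime by a <script> tag.
// Without a declaration, the compiler reports: error TS2304: Cannot find name 'legacyChart'.
legacyChart('#sales', { animate: true });

// src/typings.d.ts (sketch): a one-line ambient declaration that tells the
// compiler the global function exists and what its signature is.
declare function legacyChart(selector: string, options?: object): void;
```

With the declaration in place, the same call type-checks.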
Use [TypeScript type definition files](https://www.typescriptlang.org/docs/handbook/writing-declaration-files.html) —`d.ts files`— to tell the compiler about the libraries you load. TypeScript-aware editors leverage these same definition files to display type information about library features. Many libraries include definition files in their npm packages where both the TypeScript compiler and editors can find them. Angular is one such library. The `node_modules/@angular/core/` folder of any Angular application contains several `d.ts` files that describe parts of Angular. > You don't need to do anything to get *typings* files for library packages that include `d.ts` files. Angular packages include them already. > > ### `lib` TypeScript includes a default set of declaration files. These files contain the ambient declarations for various common JavaScript constructs present in JavaScript runtimes and the DOM. For more information, see [lib](https://www.typescriptlang.org/tsconfig#lib) in the TypeScript guide. ### Installable typings files Many libraries —jQuery, Jasmine, and Lodash among them— do *not* include `d.ts` files in their npm packages. Fortunately, either their authors or community contributors have created separate `d.ts` files for these libraries and published them in well-known locations. You can install these typings with `npm` using the [`@types/*` scoped package](https://www.typescriptlang.org/docs/handbook/declaration-files/consumption.html). Which ambient declaration files in `@types/*` are automatically included is determined by the [`types` TypeScript compiler option](https://www.typescriptlang.org/tsconfig#types). The Angular CLI generates a `tsconfig.app.json` file which is used to build an application, in which the `types` compiler option is set to `[]` to disable automatic inclusion of declarations from `@types/*`. Similarly, the `tsconfig.spec.json` file is used for testing and sets `"types": ["jasmine"]` to allow using Jasmine's ambient declarations in tests. After installing `@types/*` declarations, you have to update the `tsconfig.app.json` and `tsconfig.spec.json` files to add the newly installed declarations to the list of `types`. If the declarations are only meant for testing, then only the `tsconfig.spec.json` file should be updated. For instance, to install typings for `chai` you run `npm install @types/chai --save-dev` and then update `tsconfig.spec.json` to add `"chai"` to the list of `types`. ### `target` By default, the target is `ES2022`. To control ECMA syntax use the [Browserslist](https://github.com/browserslist/browserslist) configuration file. For more information, see the [configuring browser compatibility](build#configuring-browser-compatibility) guide. Last reviewed on Mon Oct 24 2022 angular Frequently-used modules Frequently-used modules ======================= An Angular application needs at least one module that serves as the root module. As you add features to your app, you can add them in modules. The following are frequently used Angular modules with examples of some of the things they contain: | NgModule | Import it from | Why you use it | | --- | --- | --- | | `[BrowserModule](../api/platform-browser/browsermodule)` | `@angular/platform-browser` | To run your application in a browser. | | `[CommonModule](../api/common/commonmodule)` | `@angular/common` | To use `[NgIf](../api/common/ngif)` and `[NgFor](../api/common/ngfor)`. 
| | `[FormsModule](../api/forms/formsmodule)` | `@angular/forms` | To build template-driven forms (includes `[NgModel](../api/forms/ngmodel)`). | | `[ReactiveFormsModule](../api/forms/reactiveformsmodule)` | `@angular/forms` | To build reactive forms. | | `[RouterModule](../api/router/routermodule)` | `@angular/router` | To use `[RouterLink](../api/router/routerlink)`, `.forRoot()`, and `.forChild()`. | | `[HttpClientModule](../api/common/http/httpclientmodule)` | `@angular/common/[http](../api/common/http)` | To communicate with a server using the HTTP protocol. | Importing modules ----------------- When you use these Angular modules, import them in `AppModule`, or your feature module as appropriate, and list them in the `@[NgModule](../api/core/ngmodule)` `imports` array. For example, in the basic application generated by the [Angular CLI](cli), `[BrowserModule](../api/platform-browser/browsermodule)` is the first import at the top of the `AppModule`, `app.module.ts`. ``` /* import modules so that AppModule can access them */ import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; @NgModule({ declarations: [ AppComponent ], imports: [ /* add modules here so Angular knows to use them */ BrowserModule, ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } ``` The imports at the top of the file are JavaScript import statements while the `imports` array within `@[NgModule](../api/core/ngmodule)` is Angular specific. For more information on the difference, see [JavaScript Modules vs. NgModules](ngmodule-vs-jsmodule). `[BrowserModule](../api/platform-browser/browsermodule)` and `[CommonModule](../api/common/commonmodule)` ---------------------------------------------------------------------------------------------------------- `[BrowserModule](../api/platform-browser/browsermodule)` imports `[CommonModule](../api/common/commonmodule)`, which contributes many common directives such as `[ngIf](../api/common/ngif)` and `[ngFor](../api/common/ngfor)`. Additionally, `[BrowserModule](../api/platform-browser/browsermodule)` re-exports `[CommonModule](../api/common/commonmodule)` making all of its directives available to any module that imports `[BrowserModule](../api/platform-browser/browsermodule)`. For applications that run in the browser, import `[BrowserModule](../api/platform-browser/browsermodule)` in the root `AppModule` because it provides services that are essential to launch and run a browser application. `[BrowserModule](../api/platform-browser/browsermodule)`'s providers are for the whole application so it should only be in the root module, not in feature modules. Feature modules only need the common directives in `[CommonModule](../api/common/commonmodule)`; they don't need to re-install app-wide providers. If you do import `[BrowserModule](../api/platform-browser/browsermodule)` into a lazy loaded feature module, Angular returns an error telling you to use `[CommonModule](../api/common/commonmodule)` instead. More on NgModules ----------------- You may also be interested in the following: * [Bootstrapping](bootstrapping) * [NgModules](ngmodules) * [JavaScript Modules vs. NgModules](ngmodule-vs-jsmodule) Last reviewed on Mon Feb 28 2022 angular Router tutorial: tour of heroes Router tutorial: tour of heroes =============================== This tutorial provides an extensive overview of the Angular router.
In this tutorial, you build upon a basic router configuration to explore features such as child routes, route parameters, lazy-loaded NgModules, guard routes, and preloading data to improve the user experience. For a working example of the final version of the app, see the live example. Objectives ---------- This guide describes development of a multi-page routed sample application. Along the way, it highlights key features of the router such as: * Organizing the application features into modules * Navigating to a component (*Heroes* link to "Heroes List") * Including a route parameter (passing the Hero `id` while routing to the "Hero Detail") * Child routes (the *Crisis Center* has its own routes) * The `canActivate` guard (checking route access) * The `canActivateChild` guard (checking child route access) * The `canDeactivate` guard (ask permission to discard unsaved changes) * The `resolve` guard (pre-fetching route data) * Lazy loading an `[NgModule](../api/core/ngmodule)` * The `canMatch` guard (check before loading feature module assets) This guide proceeds as a sequence of milestones as if you were building the application step-by-step, but assumes you are familiar with basic [Angular concepts](architecture). For a general introduction to Angular, see the [Getting Started](start). For a more in-depth overview, see the [Tour of Heroes](../tutorial/tour-of-heroes) tutorial. Prerequisites ------------- To complete this tutorial, you should have a basic understanding of the following concepts: * JavaScript * HTML * CSS * [Angular CLI](cli) You might find the [Tour of Heroes tutorial](../tutorial/tour-of-heroes) helpful, but it is not required. The sample application in action -------------------------------- The sample application for this tutorial helps the Hero Employment Agency find crises for heroes to solve. The application has three main feature areas: 1. A *Crisis Center* for maintaining the list of crises for assignment to heroes. 2. A *Heroes* area for maintaining the list of heroes employed by the agency. 3. An *Admin* area to manage the list of crises and heroes. Try it by clicking on this live example link. The application renders with a row of navigation buttons and the *Heroes* view with its list of heroes. Select one hero and the application takes you to a hero editing screen. Alter the name. Click the "Back" button and the application returns to the heroes list which displays the changed hero name. Notice that the name change took effect immediately. Had you clicked the browser's back button instead of the application's "Back" button, the application would have returned you to the heroes list as well. Angular application navigation updates the browser history as normal web navigation does. Now click the *Crisis Center* link for a list of ongoing crises. Select a crisis and the application takes you to a crisis editing screen. The *Crisis Detail* appears in a child component on the same page, beneath the list. Alter the name of a crisis. Notice that the corresponding name in the crisis list does *not* change. Unlike *Hero Detail*, which updates as you type, *Crisis Detail* changes are temporary until you either save or discard them by pressing the "Save" or "Cancel" buttons. Both buttons navigate back to the *Crisis Center* and its list of crises. Click the browser back button or the "Heroes" link to activate a dialog. You can say "OK" and lose your changes or click "Cancel" and continue editing. Behind this behavior is the router's `canDeactivate` guard.
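The tutorial builds this guard in a later milestone. As a rough preview, assuming the functional guard API and a hypothetical `canExit()` method on the component (neither is defined yet at this point), such a guard can be as small as the following sketch:

```
import { CanDeactivateFn } from '@angular/router';

// Hypothetical convention for this sketch: any component that implements
// canExit() is asked for permission before the router navigates away.
export interface CanComponentDeactivate {
  canExit?: () => boolean | Promise<boolean>;
}

// Returning false (or a promise of false) cancels the navigation.
export const canDeactivateGuard: CanDeactivateFn<CanComponentDeactivate> =
  (component) => (component.canExit ? component.canExit() : true);

// Register it on a route, for example: canDeactivate: [canDeactivateGuard]
```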
The guard gives you a chance to clean up or ask the user's permission before navigating away from the current view. The `Admin` and `Login` buttons illustrate other router capabilities covered later in the guide. Milestone 1: Getting started ---------------------------- Begin with a basic version of the application that navigates between two empty views. ### Create a sample application 1. Create a new Angular project, *angular-router-tour-of-heroes*. ``` ng new angular-router-tour-of-heroes ``` When prompted with `Would you like to add Angular routing?`, select `N`. When prompted with `Which stylesheet format would you like to use?`, select `CSS`. After a few moments, a new project, `angular-router-tour-of-heroes`, is ready. 2. From your terminal, navigate to the `angular-router-tour-of-heroes` directory. 3. Verify that your new application runs as expected by running the `ng serve` command. ``` ng serve ``` 4. Open a browser to `http://localhost:4200`. You should see the application running in your browser. ### Define Routes A router must be configured with a list of route definitions. Each definition translates to a [Route](../api/router/route) object which has two things: a `path`, the URL path segment for this route; and a `component`, the component associated with this route. The router draws upon its registry of definitions when the browser URL changes or when application code tells the router to navigate along a route path. The first route does the following: * When the browser's location URL changes to match the path segment `/crisis-center`, then the router activates an instance of the `CrisisListComponent` and displays its view * When the application requests navigation to the path `/crisis-center`, the router activates an instance of `CrisisListComponent`, displays its view, and updates the browser's address location and history with the URL for that path The first configuration defines an array of two routes with minimal paths leading to the `CrisisListComponent` and `HeroListComponent`. Generate the `CrisisList` and `HeroList` components so that the router has something to render. ``` ng generate component crisis-list ``` ``` ng generate component hero-list ``` Replace the contents of each component with the following sample HTML. ``` <h2>CRISIS CENTER</h2> <p>Get your crisis here</p> ``` ``` <h2>HEROES</h2> <p>Get your heroes here</p> ``` ### Register `[Router](../api/router/router)` and `[Routes](../api/router/routes)` To use the `[Router](../api/router/router)`, you must first register the `[RouterModule](../api/router/routermodule)` from the `@angular/router` package. Define an array of routes, `appRoutes`, and pass them to the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method. The `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method returns a module that contains the configured `[Router](../api/router/router)` service provider, plus other providers that the routing library requires. Once the application is bootstrapped, the `[Router](../api/router/router)` performs the initial navigation based on the current browser URL. > **NOTE**: The `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method is a pattern used to register application-wide providers. Read more about application-wide providers in the [Singleton services](singleton-services#forRoot-router) guide. 
> > ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { RouterModule, Routes } from '@angular/router'; import { AppComponent } from './app.component'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { HeroListComponent } from './hero-list/hero-list.component'; const appRoutes: Routes = [ { path: 'crisis-center', component: CrisisListComponent }, { path: 'heroes', component: HeroListComponent }, ]; @NgModule({ imports: [ BrowserModule, FormsModule, RouterModule.forRoot( appRoutes, { enableTracing: true } // <-- debugging purposes only ) ], declarations: [ AppComponent, HeroListComponent, CrisisListComponent, ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` > Adding the configured `[RouterModule](../api/router/routermodule)` to the `AppModule` is sufficient for minimal route configurations. However, as the application grows, [refactor the routing configuration](router-tutorial-toh#refactor-the-routing-configuration-into-a-routing-module) into a separate file and create a [Routing Module](router-tutorial-toh#routing-module). A routing module is a special type of `Service Module` dedicated to routing. > > Registering the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` in the `AppModule` `imports` array makes the `[Router](../api/router/router)` service available everywhere in the application. ### Add the Router Outlet The root `AppComponent` is the application shell. It has a title, a navigation bar with two links, and a router outlet where the router renders components. The router outlet serves as a placeholder where the routed components are rendered. The corresponding component template looks like this: ``` <h1>Angular Router</h1> <nav> <a routerLink="/crisis-center" routerLinkActive="active" ariaCurrentWhenActive="page">Crisis Center</a> <a routerLink="/heroes" routerLinkActive="active" ariaCurrentWhenActive="page">Heroes</a> </nav> <router-outlet></router-outlet> ``` ### Define a Wildcard route You've created two routes in the application so far, one to `/crisis-center` and the other to `/heroes`. Any other URL causes the router to throw an error and crash the app. Add a wildcard route to intercept invalid URLs and handle them gracefully. A wildcard route has a path consisting of two asterisks. It matches every URL. Thus, the router selects this wildcard route if it can't match a route earlier in the configuration. A wildcard route can navigate to a custom "404 Not Found" component or [redirect](router-tutorial-toh#redirect) to an existing route. > The router selects the route with a [*first match wins*](router-reference#example-config) strategy. Because a wildcard route is the least specific route, place it last in the route configuration. > > To test this feature, add a button with a `[RouterLink](../api/router/routerlink)` to the `HeroListComponent` template and set the link to a non-existent route called `"/sidekicks"`. ``` <h2>HEROES</h2> <p>Get your heroes here</p> <button type="button" routerLink="/sidekicks">Go to sidekicks</button> ``` The application fails if the user clicks that button because you haven't defined a `"/sidekicks"` route yet. Instead of adding the `"/sidekicks"` route, define a `wildcard` route and have it navigate to a `PageNotFoundComponent`. ``` { path: '**', component: PageNotFoundComponent } ``` Create the `PageNotFoundComponent` to display when users visit invalid URLs. 
``` ng generate component page-not-found ``` ``` <h2>Page not found</h2> ``` Now when the user visits `/sidekicks`, or any other invalid URL, the browser displays "Page not found". The browser address bar continues to point to the invalid URL. ### Set up redirects When the application launches, the initial URL in the browser bar is by default: ``` localhost:4200 ``` That doesn't match any of the hard-coded routes which means the router falls through to the wildcard route and displays the `PageNotFoundComponent`. The application needs a default route to a valid page. The default page for this application is the list of heroes. The application should navigate there as if the user clicked the "Heroes" link or pasted `localhost:4200/heroes` into the address bar. Add a `redirect` route that translates the initial relative URL (`''`) to the default path (`/heroes`) you want. Add the default route somewhere *above* the wildcard route. It's just above the wildcard route in the following excerpt showing the complete `appRoutes` for this milestone. ``` const appRoutes: Routes = [ { path: 'crisis-center', component: CrisisListComponent }, { path: 'heroes', component: HeroListComponent }, { path: '', redirectTo: '/heroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; ``` The browser address bar shows `.../heroes` as if you'd navigated there directly. A redirect route requires a `pathMatch` property to tell the router how to match a URL to the path of a route. In this app, the router should select the route to the `HeroListComponent` only when the *entire URL* matches `''`, so set the `pathMatch` value to `'full'`. Technically, `pathMatch = 'full'` results in a route hit when the *remaining*, unmatched segments of the URL match `''`. In this example, the redirect is in a top level route so the *remaining* URL and the *entire* URL are the same thing. The other possible `pathMatch` value is `'prefix'` which tells the router to match the redirect route when the remaining URL begins with the redirect route's prefix path. This doesn't apply to this sample application because if the `pathMatch` value were `'prefix'`, every URL would match `''`. Try setting it to `'prefix'` and clicking the `Go to sidekicks` button. Because that's a bad URL, you should see the "Page not found" page. Instead, you're still on the "Heroes" page. Enter a bad URL in the browser address bar. You're instantly re-routed to `/heroes`. Every URL, good or bad, that falls through to this route definition is a match. The default route should redirect to the `HeroListComponent` only when the entire url is `''`. Remember to restore the redirect to `pathMatch = 'full'`. Learn more in Victor Savkin's [post on redirects](https://vsavkin.tumblr.com/post/146722301646/angular-router-empty-paths-componentless-routes). ### Milestone 1 wrap up Your sample application can switch between two views when the user clicks a link. 
Milestone 1 covered how to do the following: * Load the router library * Add a nav bar to the shell template with anchor tags, `[routerLink](../api/router/routerlink)` and `[routerLinkActive](../api/router/routerlinkactive)` directives * Add a `[router-outlet](../api/router/routeroutlet)` to the shell template where views are displayed * Configure the router module with `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` * Set the router to compose HTML5 browser URLs * Handle invalid routes with a `wildcard` route * Navigate to the default route when the application launches with an empty path The starter application's structure looks like this: ``` angular-router-tour-of-heroes src app crisis-list crisis-list.component.css crisis-list.component.html crisis-list.component.ts hero-list hero-list.component.css hero-list.component.html hero-list.component.ts page-not-found page-not-found.component.css page-not-found.component.html page-not-found.component.ts app.component.css app.component.html app.component.ts app.module.ts main.ts index.html styles.css tsconfig.json node_modules … package.json ``` Here are the files in this milestone. ``` <h1>Angular Router</h1> <nav> <a routerLink="/crisis-center" routerLinkActive="active" ariaCurrentWhenActive="page">Crisis Center</a> <a routerLink="/heroes" routerLinkActive="active" ariaCurrentWhenActive="page">Heroes</a> </nav> <router-outlet></router-outlet> ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { RouterModule, Routes } from '@angular/router'; import { AppComponent } from './app.component'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { HeroListComponent } from './hero-list/hero-list.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; const appRoutes: Routes = [ { path: 'crisis-center', component: CrisisListComponent }, { path: 'heroes', component: HeroListComponent }, { path: '', redirectTo: '/heroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [ BrowserModule, FormsModule, RouterModule.forRoot( appRoutes, { enableTracing: true } // <-- debugging purposes only ) ], declarations: [ AppComponent, HeroListComponent, CrisisListComponent, PageNotFoundComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` <h2>HEROES</h2> <p>Get your heroes here</p> <button type="button" routerLink="/sidekicks">Go to sidekicks</button> ``` ``` <h2>CRISIS CENTER</h2> <p>Get your crisis here</p> ``` ``` <h2>Page not found</h2> ``` ``` <html lang="en"> <head> <!-- Set the base href --> <base href="/"> <title>Angular Router</title> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body> <app-root></app-root> </body> </html> ``` Milestone 2: `Routing module` ----------------------------- This milestone shows you how to configure a special-purpose module called a *Routing Module*, which holds your application's routing configuration. The Routing Module has several characteristics: * Separates routing concerns from other application concerns * Provides a module to replace or remove when testing the application * Provides a well-known location for routing service providers such as guards and resolvers * Does not declare components ### Integrate routing with your app The sample routing application does not include routing by default. 
When you use the [Angular CLI](cli) to create a project that does use routing, set the `--routing` option for the project or application, and for each NgModule. When you create or initialize a new project (using the CLI [`ng new`](cli/new) command) or a new application (using the [`ng generate app`](cli/generate) command), specify the `--routing` option. This tells the CLI to include the `@angular/router` npm package and create a file named `app-routing.module.ts`. You can then use routing in any NgModule that you add to the project or application. For example, the following command generates an NgModule that can use routing. ``` ng generate module my-module --routing ``` This creates a separate file named `my-module-routing.module.ts` to store the NgModule's routes. The file includes an empty `[Routes](../api/router/routes)` object that you can fill with routes to different components and NgModules. ### Refactor the routing configuration into a routing module Create an `AppRouting` module in the `/app` folder to contain the routing configuration. ``` ng generate module app-routing --module app --flat ``` Import the `CrisisListComponent`, `HeroListComponent`, and `PageNotFoundComponent` symbols like you did in the `app.module.ts`. Then move the `[Router](../api/router/router)` imports and routing configuration, including `[RouterModule.forRoot()](../api/router/routermodule#forRoot)`, into this routing module. Re-export the Angular `[RouterModule](../api/router/routermodule)` by adding it to the module `exports` array. By re-exporting the `[RouterModule](../api/router/routermodule)` here, the components declared in `AppModule` have access to router directives such as `[RouterLink](../api/router/routerlink)` and `[RouterOutlet](../api/router/routeroutlet)`. After these steps, the file should look like this. ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { HeroListComponent } from './hero-list/hero-list.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; const appRoutes: Routes = [ { path: 'crisis-center', component: CrisisListComponent }, { path: 'heroes', component: HeroListComponent }, { path: '', redirectTo: '/heroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [ RouterModule.forRoot( appRoutes, { enableTracing: true } // <-- debugging purposes only ) ], exports: [ RouterModule ] }) export class AppRoutingModule {} ``` Next, update the `app.module.ts` file by removing `RouterModule.forRoot` in the `imports` array. 
``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { AppRoutingModule } from './app-routing.module'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { HeroListComponent } from './hero-list/hero-list.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; @NgModule({ imports: [ BrowserModule, FormsModule, AppRoutingModule ], declarations: [ AppComponent, HeroListComponent, CrisisListComponent, PageNotFoundComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` > Later, this guide shows you how to create [multiple routing modules](router-tutorial-toh#heroes-functionality) and import those routing modules [in the correct order](router-tutorial-toh#routing-module-order). > > The application continues to work just the same, and you can use `AppRoutingModule` as the central place to maintain future routing configuration. ### Benefits of a routing module The routing module, often called the `AppRoutingModule`, replaces the routing configuration in the root or feature module. The routing module is helpful as your application grows and when the configuration includes specialized guard and resolver functions. Some developers skip the routing module when the configuration is minimal and merge the routing configuration directly into the companion module (for example, `AppModule`). Most applications should implement a routing module for consistency. It keeps the code clean when configuration becomes complex. It makes testing the feature module easier. Its existence calls attention to the fact that a module is routed. It is where developers expect to find and expand routing configuration. Milestone 3: Heroes feature --------------------------- This milestone covers the following: * Organizing the application and routes into feature areas using modules * Navigating imperatively from one component to another * Passing required and optional information in route parameters This sample application recreates the heroes feature in the "Services" section of the [Tour of Heroes tutorial](../tutorial/tour-of-heroes/toh-pt4 "Tour of Heroes: Services"), and reuses much of the code from the live example. A typical application has multiple feature areas, each dedicated to a particular business purpose with its own folder. This section shows you how to refactor the application into different feature modules, import them into the main module, and navigate among them. ### Add heroes functionality Follow these steps: * To manage the heroes, create a `HeroesModule` with routing in the heroes folder and register it with the root `AppModule`. ``` ng generate module heroes/heroes --module app --flat --routing ``` * Move the placeholder `hero-list` folder that's in the `app` folder into the `heroes` folder. * Copy the contents of the `heroes/heroes.component.html` from the "Services" tutorial into the `hero-list.component.html` template. + Re-label the `<h2>` to `<h2>HEROES</h2>`. + Delete the `<app-hero-detail>` component at the bottom of the template. * Copy the contents of the `heroes/heroes.component.css` from the live example into the `hero-list.component.css` file. * Copy the contents of the `heroes/heroes.component.ts` from the live example into the `hero-list.component.ts` file. + Change the component class name to `HeroListComponent`. + Change the `selector` to `app-hero-list`.
> Selectors are not required for routed components because components are dynamically inserted when the page is rendered. However, they are useful for identifying and targeting them in your HTML element tree. > > * Copy the `hero-detail` folder, the `hero.ts`, `hero.service.ts`, and `mock-heroes.ts` files into the `heroes` sub-folder * Copy the `message.service.ts` into the `src/app` folder * Update the relative path import to the `message.service` in the `hero.service.ts` file Next, update the `HeroesModule` metadata. * Import and add the `HeroDetailComponent` and `HeroListComponent` to the `declarations` array in the `HeroesModule`. ``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { FormsModule } from '@angular/forms'; import { HeroListComponent } from './hero-list/hero-list.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; import { HeroesRoutingModule } from './heroes-routing.module'; @NgModule({ imports: [ CommonModule, FormsModule, HeroesRoutingModule ], declarations: [ HeroListComponent, HeroDetailComponent ] }) export class HeroesModule {} ``` The hero management file structure is as follows: ``` src/app/heroes hero-detail hero-detail.component.css hero-detail.component.html hero-detail.component.ts hero-list hero-list.component.css hero-list.component.html hero-list.component.ts hero.service.ts hero.ts heroes-routing.module.ts heroes.module.ts mock-heroes.ts ``` #### Hero feature routing requirements The heroes feature has two interacting components, the hero list and the hero detail. When you navigate to list view, it gets a list of heroes and displays them. When you click on a hero, the detail view has to display that particular hero. You tell the detail view which hero to display by including the selected hero's ID in the route URL. Import the hero components from their new locations in the `src/app/heroes/` folder and define the two hero routes. Now that you have routes for the `Heroes` module, register them with the `[Router](../api/router/router)` using the `[RouterModule](../api/router/routermodule)` as you did in the `AppRoutingModule`, with an important difference. In the `AppRoutingModule`, you used the static `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method to register the routes and application level service providers. In a feature module you use the static `forChild()` method. > Only call `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` in the root `AppRoutingModule` (or the `AppModule` if that's where you register top level application routes). In any other module, you must call the `[RouterModule.forChild()](../api/router/routermodule#forChild)` method to register additional routes. > > The updated `HeroesRoutingModule` looks like this: ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { HeroListComponent } from './hero-list/hero-list.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; const heroesRoutes: Routes = [ { path: 'heroes', component: HeroListComponent }, { path: 'hero/:id', component: HeroDetailComponent } ]; @NgModule({ imports: [ RouterModule.forChild(heroesRoutes) ], exports: [ RouterModule ] }) export class HeroesRoutingModule { } ``` > Consider giving each feature module its own route configuration file. Though the feature routes are currently minimal, routes have a tendency to grow more complex even in small applications. 
> > #### Remove duplicate hero routes The hero routes are currently defined in two places: in the `HeroesRoutingModule`, by way of the `HeroesModule`, and in the `AppRoutingModule`. Routes provided by feature modules are combined together into their imported module's routes by the router. This lets you continue defining the feature module routes without modifying the main route configuration. Remove the `HeroListComponent` import and the `/heroes` route from the `app-routing.module.ts`. Leave the default and the wildcard routes as these are still in use at the top level of the application. ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; // import { HeroListComponent } from './hero-list/hero-list.component'; // <-- delete this line import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; const appRoutes: Routes = [ { path: 'crisis-center', component: CrisisListComponent }, // { path: 'heroes', component: HeroListComponent }, // <-- delete this line { path: '', redirectTo: '/heroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [ RouterModule.forRoot( appRoutes, { enableTracing: true } // <-- debugging purposes only ) ], exports: [ RouterModule ] }) export class AppRoutingModule {} ``` #### Remove heroes declarations Because the `HeroesModule` now provides the `HeroListComponent`, remove it from the `AppModule`'s `declarations` array. Now that you have a separate `HeroesModule`, you can evolve the hero feature with more components and different routes. After these steps, the `AppModule` should look like this: ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { AppRoutingModule } from './app-routing.module'; import { HeroesModule } from './heroes/heroes.module'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; @NgModule({ imports: [ BrowserModule, FormsModule, HeroesModule, AppRoutingModule ], declarations: [ AppComponent, CrisisListComponent, PageNotFoundComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ### Module import order Notice that in the module `imports` array, the `AppRoutingModule` is last and comes *after* the `HeroesModule`. ``` imports: [ BrowserModule, FormsModule, HeroesModule, AppRoutingModule ], ``` The order of route configuration is important because the router accepts the first route that matches a navigation request path. When all routes were in one `AppRoutingModule`, you put the default and [wildcard](router-tutorial-toh#wildcard) routes last, after the `/heroes` route, so that the router had a chance to match a URL to the `/heroes` route *before* hitting the wildcard route and navigating to "Page not found". Each routing module augments the route configuration in the order of import. If you listed `AppRoutingModule` first, the wildcard route would be registered *before* the hero routes. The wildcard route —which matches *every* URL— would intercept the attempt to navigate to a hero route. > Reverse the routing modules to see a click of the heroes link resulting in "Page not found". 
Learn about inspecting the runtime router configuration [below](router-tutorial-toh#inspect-config "Inspect the router config"). > > ### Route Parameters #### Route definition with a parameter Return to the `HeroesRoutingModule` and look at the route definitions again. The route to `HeroDetailComponent` has an `:id` token in the path. ``` { path: 'hero/:id', component: HeroDetailComponent } ``` The `:id` token creates a slot in the path for a Route Parameter. In this case, this configuration causes the router to insert the `id` of a hero into that slot. If you tell the router to navigate to the detail component and display "Magneta", you expect a hero ID to appear in the browser URL like this: ``` localhost:4200/hero/15 ``` If a user enters that URL into the browser address bar, the router should recognize the pattern and go to the same "Magneta" detail view. Embedding the route parameter token, `:id`, in the route definition path is a good choice for this scenario because the `id` is *required* by the `HeroDetailComponent` and because the value `15` in the path clearly distinguishes the route to "Magneta" from a route for some other hero. #### Setting the route parameters in the list view After navigating to the `HeroDetailComponent`, you expect to see the details of the selected hero. You need two pieces of information: the routing path to the component and the hero's `id`. Accordingly, the *link parameters array* has two items: The routing *path* and a *route parameter* that specifies the `id` of the selected hero. ``` <a [routerLink]="['/hero', hero.id]"> ``` The router composes the destination URL from the array like this: `localhost:4200/hero/15`. The router extracts the route parameter (`id:15`) from the URL and supplies it to the `HeroDetailComponent` using the `[ActivatedRoute](../api/router/activatedroute)` service. ### `Activated [Route](../api/router/route)` in action Import the `[Router](../api/router/router)`, `[ActivatedRoute](../api/router/activatedroute)`, and `[ParamMap](../api/router/parammap)` tokens from the router package. ``` import { Router, ActivatedRoute, ParamMap } from '@angular/router'; ``` Import the `switchMap` operator because you need it later to process the `Observable` route parameters. ``` import { switchMap } from 'rxjs/operators'; ``` Add the services as private variables to the constructor so that Angular injects them (makes them visible to the component). ``` constructor( private route: ActivatedRoute, private router: Router, private service: HeroService ) {} ``` In the `ngOnInit()` method, use the `[ActivatedRoute](../api/router/activatedroute)` service to retrieve the parameters for the route, pull the hero `id` from the parameters, and retrieve the hero to display. ``` ngOnInit() { this.hero$ = this.route.paramMap.pipe( switchMap((params: ParamMap) => this.service.getHero(params.get('id')!)) ); } ``` When the map changes, `paramMap` gets the `id` parameter from the changed parameters. Then you tell the `HeroService` to fetch the hero with that `id` and return the result of the `HeroService` request. The `switchMap` operator does two things. It flattens the `Observable<Hero>` that `HeroService` returns and cancels previous pending requests. If the user re-navigates to this route with a new `id` while the `HeroService` is still retrieving the old `id`, `switchMap` discards that old request and returns the hero for the new `id`. 
`[AsyncPipe](../api/common/asyncpipe)` handles the observable subscription and the component's `hero` property will be (re)set with the retrieved hero. #### `[ParamMap](../api/router/parammap)` API The `[ParamMap](../api/router/parammap)` API is inspired by the [URLSearchParams interface](https://developer.mozilla.org/docs/Web/API/URLSearchParams). It provides methods to handle parameter access for both route parameters (`paramMap`) and query parameters (`queryParamMap`). | Member | Details | | --- | --- | | `has(name)` | Returns `true` if the parameter name is in the map of parameters. | | `get(name)` | Returns the parameter name value (a `string`) if present, or `null` if the parameter name is not in the map. Returns the *first* element if the parameter value is actually an array of values. | | `getAll(name)` | Returns a `string array` of the parameter name value if found, or an empty `array` if the parameter name value is not in the map. Use `getAll` when a single parameter could have multiple values. | | `keys` | Returns a `string array` of all parameter names in the map. | #### Observable `paramMap` and component reuse In this example, you retrieve the route parameter map from an `Observable`. That implies that the route parameter map can change during the lifetime of this component. By default, the router re-uses a component instance when it re-navigates to the same component type without visiting a different component first. The route parameters could change each time. Suppose a parent component navigation bar had "forward" and "back" buttons that scrolled through the list of heroes. Each click navigated imperatively to the `HeroDetailComponent` with the next or previous `id`. You wouldn't want the router to remove the current `HeroDetailComponent` instance from the DOM only to re-create it for the next `id` as this would re-render the view. For better UX, the router re-uses the same component instance and updates the parameter. Because `ngOnInit()` is only called once per component instantiation, you can detect when the route parameters change from *within the same instance* using the observable `paramMap` property. > When subscribing to an observable in a component, you almost always unsubscribe when the component is destroyed. > > However, `[ActivatedRoute](../api/router/activatedroute)` observables are among the exceptions because `[ActivatedRoute](../api/router/activatedroute)` and its observables are insulated from the `[Router](../api/router/router)` itself. The `[Router](../api/router/router)` destroys a routed component when it is no longer needed. This means all the component's members will also be destroyed, including the injected `[ActivatedRoute](../api/router/activatedroute)` and the subscriptions to its `Observable` properties. > > The `[Router](../api/router/router)` does not `complete` any `Observable` of the `[ActivatedRoute](../api/router/activatedroute)` so any `finalize` or `complete` blocks will not run. If you need to handle something in a `finalize`, you still need to unsubscribe in `ngOnDestroy`. You also have to unsubscribe if your observable pipe has a delay with code you do not want to run after the component is destroyed. > > #### `snapshot`: the no-observable alternative This application won't re-use the `HeroDetailComponent`. The user always returns to the hero list to select another hero to view. There's no way to navigate from one hero detail to another hero detail without visiting the list component in between. 
Therefore, the router creates a new `HeroDetailComponent` instance every time. When you know for certain that a `HeroDetailComponent` instance will never be re-used, you can use `snapshot`. `route.snapshot` provides the initial value of the route parameter map. You can access the parameters directly without subscribing or adding observable operators as in the following: ``` ngOnInit() { const id = this.route.snapshot.paramMap.get('id')!; this.hero$ = this.service.getHero(id); } ``` > `snapshot` only gets the initial value of the parameter map with this technique. Use the observable `paramMap` approach if there's a possibility that the router could re-use the component. This tutorial sample application uses the observable `paramMap`. > > ### Navigating back to the list component The `HeroDetailComponent` "Back" button uses the `gotoHeroes()` method that navigates imperatively back to the `HeroListComponent`. The router `navigate()` method takes the same one-item *link parameters array* that you can bind to a `[[routerLink](../api/router/routerlink)]` directive. It holds the path to the `HeroListComponent`: ``` gotoHeroes() { this.router.navigate(['/heroes']); } ``` #### Route Parameters: Required or optional? Use [route parameters](router-tutorial-toh#route-parameters) to specify a required parameter value within the route URL as you do when navigating to the `HeroDetailComponent` in order to view the hero with `id` 15: ``` localhost:4200/hero/15 ``` You can also add optional information to a route request. For example, when returning to the hero list from the hero detail view, it would be nice if the viewed hero were preselected in the list. You implement this feature by including the viewed hero's `id` in the URL as an optional parameter when returning from the `HeroDetailComponent`. Optional information can also include other forms such as: * Loosely structured search criteria; for example, `name='wind*'` * Multiple values; for example, `after='12/31/2015' & before='1/1/2017'` —in no particular order— `before='1/1/2017' & after='12/31/2015'` — in a variety of formats— `during='currentYear'` As these kinds of parameters don't fit smoothly in a URL path, you can use optional parameters for conveying arbitrarily complex information during navigation. Optional parameters aren't involved in pattern matching and afford flexibility of expression. The router supports navigation with optional parameters as well as required route parameters. Define optional parameters in a separate object *after* you define the required route parameters. In general, use a required route parameter when the value is mandatory (for example, if necessary to distinguish one route path from another); and an optional parameter when the value is optional, complex, and/or multivariate. #### Heroes list: optionally selecting a hero When navigating to the `HeroDetailComponent` you specified the required `id` of the hero-to-edit in the route parameter and made it the second item of the [*link parameters array*](router-tutorial-toh#link-parameters-array). ``` <a [routerLink]="['/hero', hero.id]"> ``` The router embedded the `id` value in the navigation URL because you had defined it as a route parameter with an `:id` placeholder token in the route `path`: ``` { path: 'hero/:id', component: HeroDetailComponent } ``` When the user clicks the back button, the `HeroDetailComponent` constructs another *link parameters array* which it uses to navigate back to the `HeroListComponent`.
``` gotoHeroes() { this.router.navigate(['/heroes']); } ``` This array lacks a route parameter because previously you didn't need to send information to the `HeroListComponent`. Now, send the `id` of the current hero with the navigation request so that the `HeroListComponent` can highlight that hero in its list. Send the `id` with an object that contains an optional `id` parameter. For demonstration purposes, there's an extra junk parameter (`foo`) in the object that the `HeroListComponent` should ignore. Here's the revised navigation statement: ``` gotoHeroes(hero: Hero) { const heroId = hero ? hero.id : null; // Pass along the hero id if available // so that the HeroList component can select that hero. // Include a junk 'foo' property for fun. this.router.navigate(['/heroes', { id: heroId, foo: 'foo' }]); } ``` The application still works. Clicking "back" returns to the hero list view. Look at the browser address bar. It should look something like this, depending on where you run it: ``` localhost:4200/heroes;id=15;foo=foo ``` The `id` value appears in the URL as (`;id=15;foo=foo`), not in the URL path. The path for the "Heroes" route doesn't have an `:id` token. The optional route parameters are not separated by "?" and "&" as they would be in the URL query string. They are separated by semicolons ";". This is matrix URL notation. > Matrix URL notation is an idea first introduced in a [1996 proposal](https://www.w3.org/DesignIssues/MatrixURIs.html) by the founder of the web, Tim Berners-Lee. > > Although matrix notation never made it into the HTML standard, it is legal and it became popular among browser routing systems as a way to isolate parameters belonging to parent and child routes. As such, the Router provides support for the matrix notation across browsers. > > ### Route parameters in the `[ActivatedRoute](../api/router/activatedroute)` service In its current state of development, the list of heroes is unchanged. No hero row is highlighted. The `HeroListComponent` needs code that expects parameters. Previously, when navigating from the `HeroListComponent` to the `HeroDetailComponent`, you subscribed to the route parameter map `Observable` and made it available to the `HeroDetailComponent` in the `[ActivatedRoute](../api/router/activatedroute)` service. You injected that service in the constructor of the `HeroDetailComponent`. This time you'll be navigating in the opposite direction, from the `HeroDetailComponent` to the `HeroListComponent`. First, extend the router import statement to include the `[ActivatedRoute](../api/router/activatedroute)` service symbol: ``` import { ActivatedRoute } from '@angular/router'; ``` Import the `switchMap` operator to perform an operation on the `Observable` of route parameter map. ``` import { Observable } from 'rxjs'; import { switchMap } from 'rxjs/operators'; ``` Inject the `[ActivatedRoute](../api/router/activatedroute)` in the `HeroListComponent` constructor. ``` export class HeroListComponent implements OnInit { heroes$!: Observable<Hero[]>; selectedId = 0; constructor( private service: HeroService, private route: ActivatedRoute ) {} ngOnInit() { this.heroes$ = this.route.paramMap.pipe( switchMap(params => { this.selectedId = parseInt(params.get('id')!, 10); return this.service.getHeroes(); }) ); } } ``` The `[ActivatedRoute.paramMap](../api/router/activatedroute#paramMap)` property is an `Observable` map of route parameters. The `paramMap` emits a new map of values that includes `id` when the user navigates to the component. 
In `ngOnInit()` you subscribe to those values, set the `selectedId`, and get the heroes.

Update the template with a [class binding](class-binding). The binding adds the `selected` CSS class when the comparison returns `true` and removes it when `false`. Look for it within the repeated `<li>` tag as shown here:

```
<h2>Heroes</h2>
<ul class="heroes">
  <li *ngFor="let hero of heroes$ | async"
    [class.selected]="hero.id === selectedId">
    <a [routerLink]="['/hero', hero.id]">
      <span class="badge">{{ hero.id }}</span>{{ hero.name }}
    </a>
  </li>
</ul>

<button type="button" routerLink="/sidekicks">Go to sidekicks</button>
```

Add some styles to apply when the hero is selected.

```
.heroes .selected a {
  background-color: #d6e6f7;
}

.heroes .selected a:hover {
  background-color: #bdd7f5;
}
```

When the user navigates from the heroes list to the "Magneta" hero and back, "Magneta" appears selected. The optional `foo` route parameter is harmless and the router continues to ignore it.

### Adding routable animations

This section shows you how to add some animations to the `HeroDetailComponent`.

First, import the `[BrowserAnimationsModule](../api/platform-browser/animations/browseranimationsmodule)` and add it to the `imports` array in `app.module.ts`:

```
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';

@NgModule({
  imports: [
    BrowserAnimationsModule,
  ],
})
```

Next, add a `data` object to the routes for `HeroListComponent` and `HeroDetailComponent`. Transitions are based on `states` and you use the `[animation](../api/animations/animation)` data from the route to provide a named animation [`state`](../api/animations/state) for the transitions.

```
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

import { HeroListComponent } from './hero-list/hero-list.component';
import { HeroDetailComponent } from './hero-detail/hero-detail.component';

const heroesRoutes: Routes = [
  { path: 'heroes', component: HeroListComponent, data: { animation: 'heroes' } },
  { path: 'hero/:id', component: HeroDetailComponent, data: { animation: 'hero' } }
];

@NgModule({
  imports: [
    RouterModule.forChild(heroesRoutes)
  ],
  exports: [
    RouterModule
  ]
})
export class HeroesRoutingModule { }
```

Create an `animations.ts` file in the root `src/app/` folder.
The contents look like this:

```
import {
  trigger, animateChild, group,
  transition, animate, style, query
} from '@angular/animations';

// Routable animations
export const slideInAnimation =
  trigger('routeAnimation', [
    transition('heroes <=> hero', [
      style({ position: 'relative' }),
      query(':enter, :leave', [
        style({
          position: 'absolute',
          top: 0,
          left: 0,
          width: '100%'
        })
      ]),
      query(':enter', [
        style({ left: '-100%'})
      ]),
      query(':leave', animateChild()),
      group([
        query(':leave', [
          animate('300ms ease-out', style({ left: '100%'}))
        ]),
        query(':enter', [
          animate('300ms ease-out', style({ left: '0%'}))
        ])
      ]),
      query(':enter', animateChild()),
    ])
  ]);
```

This file does the following:

* Imports the animation symbols that build the animation triggers, control state, and manage transitions between states
* Exports a constant named `slideInAnimation` set to an animation trigger named `routeAnimation`
* Defines one transition for switching back and forth between the `heroes` and `hero` routes: the component eases in from the left of the screen as it enters the application view (`:enter`) and slides to the right as it leaves the application view (`:leave`)

Back in the `AppComponent`, import the `ChildrenOutletContexts` token from the `@angular/router` package and the `slideInAnimation` from `'./animations'`.

Add an `animations` array to the `@[Component](../api/core/component)` metadata that contains the `slideInAnimation`.

```
import { ChildrenOutletContexts } from '@angular/router';
import { slideInAnimation } from './animations';

@Component({
  selector: 'app-root',
  templateUrl: 'app.component.html',
  styleUrls: ['app.component.css'],
  animations: [ slideInAnimation ]
})
```

To use the routable animations, wrap the `[RouterOutlet](../api/router/routeroutlet)` inside an element and bind the `@routeAnimation` trigger to that element. For the `@routeAnimation` transitions to key off states, provide the trigger with the `animation` value from the route's `data`. In this example, the `AppComponent` reads that value through the injected `ChildrenOutletContexts` service rather than through a template reference to the outlet.

```
<h1>Angular Router</h1>
<nav>
  <a routerLink="/crisis-center" routerLinkActive="active" ariaCurrentWhenActive="page">Crisis Center</a>
  <a routerLink="/heroes" routerLinkActive="active" ariaCurrentWhenActive="page">Heroes</a>
</nav>
<div [@routeAnimation]="getAnimationData()">
  <router-outlet></router-outlet>
</div>
```

The `@routeAnimation` trigger is bound to `getAnimationData()`, which returns the `animation` property from the `data` provided by the primary route. The `[animation](../api/animations/animation)` property matches the `[transition](../api/animations/transition)` names you used in the `slideInAnimation` defined in `animations.ts`.

```
export class AppComponent {
  constructor(private contexts: ChildrenOutletContexts) {}

  getAnimationData() {
    return this.contexts.getContext('primary')?.route?.snapshot?.data?.['animation'];
  }
}
```

When switching between the two routes, the `HeroDetailComponent` and `HeroListComponent` now ease in from the left when routed to, and slide to the right when navigating away.
### Milestone 3 wrap up This section covered the following: * Organizing the application into feature areas * Navigating imperatively from one component to another * Passing information along in route parameters and subscribe to them in the component * Importing the feature area NgModule into the `AppModule` * Applying routable animations based on the page After these changes, the folder structure is as follows: ``` angular-router-tour-of-heroes src app crisis-list crisis-list.component.css crisis-list.component.html crisis-list.component.ts heroes hero-detail hero-detail.component.css hero-detail.component.html hero-detail.component.ts hero-list hero-list.component.css hero-list.component.html hero-list.component.ts hero.service.ts hero.ts heroes-routing.module.ts heroes.module.ts mock-heroes.ts page-not-found page-not-found.component.css page-not-found.component.html page-not-found.component.ts animations.ts app.component.css app.component.html app.component.ts app.module.ts app-routing.module.ts main.ts message.service.ts index.html styles.css tsconfig.json node_modules … package.json ``` Here are the relevant files for this version of the sample application. ``` import { trigger, animateChild, group, transition, animate, style, query } from '@angular/animations'; // Routable animations export const slideInAnimation = trigger('routeAnimation', [ transition('heroes <=> hero', [ style({ position: 'relative' }), query(':enter, :leave', [ style({ position: 'absolute', top: 0, left: 0, width: '100%' }) ]), query(':enter', [ style({ left: '-100%'}) ]), query(':leave', animateChild()), group([ query(':leave', [ animate('300ms ease-out', style({ left: '100%'})) ]), query(':enter', [ animate('300ms ease-out', style({ left: '0%'})) ]) ]), query(':enter', animateChild()), ]) ]); ``` ``` <h1>Angular Router</h1> <nav> <a routerLink="/crisis-center" routerLinkActive="active" ariaCurrentWhenActive="page">Crisis Center</a> <a routerLink="/heroes" routerLinkActive="active" ariaCurrentWhenActive="page">Heroes</a> </nav> <div [@routeAnimation]="getAnimationData()"> <router-outlet></router-outlet> </div> ``` ``` import { Component } from '@angular/core'; import { ChildrenOutletContexts } from '@angular/router'; import { slideInAnimation } from './animations'; @Component({ selector: 'app-root', templateUrl: 'app.component.html', styleUrls: ['app.component.css'], animations: [ slideInAnimation ] }) export class AppComponent { constructor(private contexts: ChildrenOutletContexts) {} getAnimationData() { return this.contexts.getContext('primary')?.route?.snapshot?.data?.['animation']; } } ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; import { AppComponent } from './app.component'; import { AppRoutingModule } from './app-routing.module'; import { HeroesModule } from './heroes/heroes.module'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, FormsModule, HeroesModule, AppRoutingModule ], declarations: [ AppComponent, CrisisListComponent, PageNotFoundComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { CrisisListComponent } from 
'./crisis-list/crisis-list.component'; /* . . . */ import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; const appRoutes: Routes = [ { path: 'crisis-center', component: CrisisListComponent }, /* . . . */ { path: '', redirectTo: '/heroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [ RouterModule.forRoot( appRoutes, { enableTracing: true } // <-- debugging purposes only ) ], exports: [ RouterModule ] }) export class AppRoutingModule {} ``` ``` /* HeroListComponent's private CSS styles */ .heroes { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 100%; } .heroes li { position: relative; cursor: pointer; } .heroes li:hover { left: .1em; } .heroes a { color: black; text-decoration: none; display: block; font-size: 1.2rem; background-color: #eee; margin: .5rem .5rem .5rem 0; padding: .5rem 0; border-radius: 4px; } .heroes a:hover { color: #2c3a41; background-color: #e6e6e6; } .heroes a:active { background-color: #525252; color: #fafafa; } .heroes .selected a { background-color: #d6e6f7; } .heroes .selected a:hover { background-color: #bdd7f5; } .heroes .badge { padding: .5em .6em; color: white; background-color: #435b60; min-width: 16px; margin-right: .8em; border-radius: 4px 0 0 4px; } ``` ``` <h2>Heroes</h2> <ul class="heroes"> <li *ngFor="let hero of heroes$ | async" [class.selected]="hero.id === selectedId"> <a [routerLink]="['/hero', hero.id]"> <span class="badge">{{ hero.id }}</span>{{ hero.name }} </a> </li> </ul> <button type="button" routerLink="/sidekicks">Go to sidekicks</button> ``` ``` // TODO: Feature Componetized like CrisisCenter import { Observable } from 'rxjs'; import { switchMap } from 'rxjs/operators'; import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { HeroService } from '../hero.service'; import { Hero } from '../hero'; @Component({ selector: 'app-hero-list', templateUrl: './hero-list.component.html', styleUrls: ['./hero-list.component.css'] }) export class HeroListComponent implements OnInit { heroes$!: Observable<Hero[]>; selectedId = 0; constructor( private service: HeroService, private route: ActivatedRoute ) {} ngOnInit() { this.heroes$ = this.route.paramMap.pipe( switchMap(params => { this.selectedId = parseInt(params.get('id')!, 10); return this.service.getHeroes(); }) ); } } ``` ``` <h2>Heroes</h2> <div *ngIf="hero$ | async as hero"> <h3>{{ hero.name }}</h3> <p>Id: {{ hero.id }}</p> <label for="hero-name">Hero name: </label> <input type="text" id="hero-name" [(ngModel)]="hero.name" placeholder="name"/> <button type="button" (click)="gotoHeroes(hero)">Back</button> </div> ``` ``` import { switchMap } from 'rxjs/operators'; import { Component, OnInit } from '@angular/core'; import { Router, ActivatedRoute, ParamMap } from '@angular/router'; import { Observable } from 'rxjs'; import { HeroService } from '../hero.service'; import { Hero } from '../hero'; @Component({ selector: 'app-hero-detail', templateUrl: './hero-detail.component.html', styleUrls: ['./hero-detail.component.css'] }) export class HeroDetailComponent implements OnInit { hero$!: Observable<Hero>; constructor( private route: ActivatedRoute, private router: Router, private service: HeroService ) {} ngOnInit() { this.hero$ = this.route.paramMap.pipe( switchMap((params: ParamMap) => this.service.getHero(params.get('id')!)) ); } gotoHeroes(hero: Hero) { const heroId = hero ? 
hero.id : null; // Pass along the hero id if available // so that the HeroList component can select that hero. // Include a junk 'foo' property for fun. this.router.navigate(['/heroes', { id: heroId, foo: 'foo' }]); } } ``` ``` import { Injectable } from '@angular/core'; import { Observable, of } from 'rxjs'; import { map } from 'rxjs/operators'; import { Hero } from './hero'; import { HEROES } from './mock-heroes'; import { MessageService } from '../message.service'; @Injectable({ providedIn: 'root', }) export class HeroService { constructor(private messageService: MessageService) { } getHeroes(): Observable<Hero[]> { // TODO: send the message _after_ fetching the heroes this.messageService.add('HeroService: fetched heroes'); return of(HEROES); } getHero(id: number | string) { return this.getHeroes().pipe( // (+) before `id` turns the string into a number map((heroes: Hero[]) => heroes.find(hero => hero.id === +id)!) ); } } ``` ``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { FormsModule } from '@angular/forms'; import { HeroListComponent } from './hero-list/hero-list.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; import { HeroesRoutingModule } from './heroes-routing.module'; @NgModule({ imports: [ CommonModule, FormsModule, HeroesRoutingModule ], declarations: [ HeroListComponent, HeroDetailComponent ] }) export class HeroesModule {} ``` ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { HeroListComponent } from './hero-list/hero-list.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; const heroesRoutes: Routes = [ { path: 'heroes', component: HeroListComponent, data: { animation: 'heroes' } }, { path: 'hero/:id', component: HeroDetailComponent, data: { animation: 'hero' } } ]; @NgModule({ imports: [ RouterModule.forChild(heroesRoutes) ], exports: [ RouterModule ] }) export class HeroesRoutingModule { } ``` ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root', }) export class MessageService { messages: string[] = []; add(message: string) { this.messages.push(message); } clear() { this.messages = []; } } ``` Milestone 4: Crisis center feature ---------------------------------- This section shows you how to add child routes and use relative routing in your app. To add more features to the application's current crisis center, take similar steps as for the heroes feature: * Create a `crisis-center` subfolder in the `src/app` folder * Copy the files and folders from `app/heroes` into the new `crisis-center` folder * In the new files, change every mention of "hero" to "crisis", and "heroes" to "crises" * Rename the NgModule files to `crisis-center.module.ts` and `crisis-center-routing.module.ts` Use mock crises instead of mock heroes: ``` import { Crisis } from './crisis'; export const CRISES: Crisis[] = [ { id: 1, name: 'Dragon Burning Cities' }, { id: 2, name: 'Sky Rains Great White Sharks' }, { id: 3, name: 'Giant Asteroid Heading For Earth' }, { id: 4, name: 'Procrastinators Meeting Delayed Again' }, ]; ``` The resulting crisis center is a foundation for introducing a new concept —child routing. You can leave Heroes in its current state as a contrast with the Crisis Center. 
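The `CRISES` array above imports a `Crisis` model from `./crisis`, which isn't listed in this guide. A minimal sketch of that file, assuming the model simply mirrors the `Hero` class with `id` and `name` fields:

```
// src/app/crisis-center/crisis.ts (assumed shape, mirroring the Hero model)
export class Crisis {
  constructor(public id: number, public name: string) { }
}
```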
> In keeping with the [Separation of Concerns principle](https://blog.8thlight.com/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html "Separation of Concerns"), changes to the Crisis Center don't affect the `AppModule` or any other feature's component. > > ### A crisis center with child routes This section shows you how to organize the crisis center to conform to the following recommended pattern for Angular applications: * Each feature area resides in its own folder * Each feature has its own Angular feature module * Each area has its own area root component * Each area root component has its own router outlet and child routes * Feature area routes rarely (if ever) cross with routes of other features If your application had many feature areas, the component trees might consist of multiple components for those features, each with branches of other, related, components. ### Child routing component Generate a `CrisisCenter` component in the `crisis-center` folder: ``` ng generate component crisis-center/crisis-center ``` Update the component template with the following markup: ``` <h2>Crisis Center</h2> <router-outlet></router-outlet> ``` The `CrisisCenterComponent` has the following in common with the `AppComponent`: * It is the root of the crisis center area, just as `AppComponent` is the root of the entire application * It is a shell for the crisis management feature area, just as the `AppComponent` is a shell to manage the high-level workflow Like most shells, the `CrisisCenterComponent` class is minimal because it has no business logic, and its template has no links, just a title and `<[router-outlet](../api/router/routeroutlet)>` for the crisis center child component. ### Child route configuration As a host page for the "Crisis Center" feature, generate a `CrisisCenterHome` component in the `crisis-center` folder. ``` ng generate component crisis-center/crisis-center-home ``` Update the template with a welcome message to the `Crisis Center`. ``` <h3>Welcome to the Crisis Center</h3> ``` Update the `crisis-center-routing.module.ts` you renamed after copying it from `heroes-routing.module.ts` file. This time, you define child routes within the parent `crisis-center` route. ``` const crisisCenterRoutes: Routes = [ { path: 'crisis-center', component: CrisisCenterComponent, children: [ { path: '', component: CrisisListComponent, children: [ { path: ':id', component: CrisisDetailComponent }, { path: '', component: CrisisCenterHomeComponent } ] } ] } ]; ``` Notice that the parent `crisis-center` route has a `children` property with a single route containing the `CrisisListComponent`. The `CrisisListComponent` route also has a `children` array with two routes. These two routes navigate to the crisis center child components, `CrisisCenterHomeComponent` and `CrisisDetailComponent`, respectively. There are important differences in the way the router treats child routes. The router displays the components of these routes in the `[RouterOutlet](../api/router/routeroutlet)` of the `CrisisCenterComponent`, not in the `[RouterOutlet](../api/router/routeroutlet)` of the `AppComponent` shell. The `CrisisListComponent` contains the crisis list and a `[RouterOutlet](../api/router/routeroutlet)` to display the `Crisis Center Home` and `Crisis Detail` route components. The `Crisis Detail` route is a child of the `Crisis List`. The router [reuses components](router-tutorial-toh#reuse) by default, so the `Crisis Detail` component is re-used as you select different crises. 
In contrast, back in the `Hero Detail` route, [the component was recreated](router-tutorial-toh#snapshot-the-no-observable-alternative) each time you selected a different hero from the list of heroes. At the top level, paths that begin with `/` refer to the root of the application. But child routes extend the path of the parent route. With each step down the route tree, you add a slash followed by the route path, unless the path is empty. Apply that logic to navigation within the crisis center for which the parent path is `/crisis-center`. * To navigate to the `CrisisCenterHomeComponent`, the full URL is `/crisis-center` (`/crisis-center` + `''` + `''`) * To navigate to the `CrisisDetailComponent` for a crisis with `id=2`, the full URL is `/crisis-center/2` (`/crisis-center` + `''` + `'/2'`) The absolute URL for the latter example, including the `localhost` origin, is as follows: ``` localhost:4200/crisis-center/2 ``` Here's the complete `crisis-center-routing.module.ts` file with its imports. ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { CrisisCenterHomeComponent } from './crisis-center-home/crisis-center-home.component'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { CrisisCenterComponent } from './crisis-center/crisis-center.component'; import { CrisisDetailComponent } from './crisis-detail/crisis-detail.component'; const crisisCenterRoutes: Routes = [ { path: 'crisis-center', component: CrisisCenterComponent, children: [ { path: '', component: CrisisListComponent, children: [ { path: ':id', component: CrisisDetailComponent }, { path: '', component: CrisisCenterHomeComponent } ] } ] } ]; @NgModule({ imports: [ RouterModule.forChild(crisisCenterRoutes) ], exports: [ RouterModule ] }) export class CrisisCenterRoutingModule { } ``` ### Import crisis center module into the `AppModule` routes As with the `HeroesModule`, you must add the `CrisisCenterModule` to the `imports` array of the `AppModule` *before* the `AppRoutingModule`: ``` import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { CommonModule } from '@angular/common'; import { CrisisCenterHomeComponent } from './crisis-center-home/crisis-center-home.component'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { CrisisCenterComponent } from './crisis-center/crisis-center.component'; import { CrisisDetailComponent } from './crisis-detail/crisis-detail.component'; import { CrisisCenterRoutingModule } from './crisis-center-routing.module'; @NgModule({ imports: [ CommonModule, FormsModule, CrisisCenterRoutingModule ], declarations: [ CrisisCenterComponent, CrisisListComponent, CrisisCenterHomeComponent, CrisisDetailComponent ] }) export class CrisisCenterModule {} ``` ``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; import { ComposeMessageComponent } from './compose-message/compose-message.component'; import { AppRoutingModule } from './app-routing.module'; import { HeroesModule } from './heroes/heroes.module'; import { CrisisCenterModule } from './crisis-center/crisis-center.module'; @NgModule({ imports: [ CommonModule, FormsModule, HeroesModule, CrisisCenterModule, AppRoutingModule ], declarations: [ AppComponent, PageNotFoundComponent ], 
bootstrap: [ AppComponent ] }) export class AppModule { } ``` > The import order of the modules is important because the order of the routes defined in the modules affects route matching. If the `AppModule` were imported first, its wildcard route (`path: '**'`) would take precedence over the routes defined in `CrisisCenterModule`. For more information, see the section on [route order](router#route-order). > > Remove the initial crisis center route from the `app-routing.module.ts` because now the `HeroesModule` and the `CrisisCenter` modules provide the feature routes. The `app-routing.module.ts` file retains the top-level application routes such as the default and wildcard routes. ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; const appRoutes: Routes = [ { path: '', redirectTo: '/heroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [ RouterModule.forRoot( appRoutes, { enableTracing: true } // <-- debugging purposes only ) ], exports: [ RouterModule ] }) export class AppRoutingModule {} ``` ### Relative navigation While building out the crisis center feature, you navigated to the crisis detail route using an absolute path that begins with a slash. The router matches such absolute paths to routes starting from the top of the route configuration. You could continue to use absolute paths like this to navigate inside the Crisis Center feature, but that pins the links to the parent routing structure. If you changed the parent `/crisis-center` path, you would have to change the link parameters array. You can free the links from this dependency by defining paths that are relative to the current URL segment. Navigation within the feature area remains intact even if you change the parent route path to the feature. > The router supports directory-like syntax in a *link parameters list* to help guide route name lookup: > > > > | Directory-like syntax | Details | > | --- | --- | > | `./` `no leading slash` | Relative to the current level. | > | `../` | Up one level in the route path. | > > You can combine relative navigation syntax with an ancestor path. If you must navigate to a sibling route, you could use the `../<sibling>` convention to go up one level, then over and down the sibling route path. > > To navigate a relative path with the `Router.navigate` method, you must supply the `[ActivatedRoute](../api/router/activatedroute)` to give the router knowledge of where you are in the current route tree. After the *link parameters array*, add an object with a `relativeTo` property set to the `[ActivatedRoute](../api/router/activatedroute)`. The router then calculates the target URL based on the active route's location. > Always specify the complete absolute path when calling router's `navigateByUrl()` method. > > ### Navigate to crisis list with a relative URL You've already injected the `[ActivatedRoute](../api/router/activatedroute)` that you need to compose the relative navigation path. When using a `[RouterLink](../api/router/routerlink)` to navigate instead of the `[Router](../api/router/router)` service, you'd use the same link parameters array, but you wouldn't provide the object with the `relativeTo` property. The `[ActivatedRoute](../api/router/activatedroute)` is implicit in a `[RouterLink](../api/router/routerlink)` directive. 
Update the `gotoCrises()` method of the `CrisisDetailComponent` to navigate back to the Crisis Center list using relative path navigation. ``` // Relative navigation back to the crises this.router.navigate(['../', { id: crisisId, foo: 'foo' }], { relativeTo: this.route }); ``` Notice that the path goes up a level using the `../` syntax. If the current crisis `id` is `3`, the resulting path back to the crisis list is `/crisis-center/;id=3;foo=foo`. ### Displaying multiple routes in named outlets You decide to give users a way to contact the crisis center. When a user clicks a "Contact" button, you want to display a message in a popup view. The popup should stay open, even when switching between pages in the application, until the user closes it by sending the message or canceling. Clearly you can't put the popup in the same outlet as the other pages. Until now, you've defined a single outlet and you've nested child routes under that outlet to group routes together. The router only supports one primary unnamed outlet per template. A template can also have any number of named outlets. Each named outlet has its own set of routes with their own components. Multiple outlets can display different content, determined by different routes, all at the same time. Add an outlet named "popup" in the `AppComponent`, directly following the unnamed outlet. ``` <div [@routeAnimation]="getAnimationData()"> <router-outlet></router-outlet> </div> <router-outlet name="popup"></router-outlet> ``` That's where a popup goes, once you learn how to route a popup component to it. #### Secondary routes Named outlets are the targets of *secondary routes*. Secondary routes look like primary routes and you configure them the same way. They differ in a few key respects. * They are independent of each other * They work in combination with other routes * They are displayed in named outlets Generate a new component to compose the message. ``` ng generate component compose-message ``` It displays a short form with a header, an input box for the message, and two buttons, "Send" and "Cancel". Here's the component, its template, and styles: ``` <h3>Contact Crisis Center</h3> <div *ngIf="details"> {{ details }} </div> <div> <div> <label for="message">Enter your message: </label> </div> <div> <textarea id="message" [(ngModel)]="message" rows="10" cols="35" [disabled]="sending"></textarea> </div> </div> <p *ngIf="!sending"> <button type="button" (click)="send()">Send</button> <button type="button" (click)="cancel()">Cancel</button> </p> ``` ``` import { Component } from '@angular/core'; import { Router } from '@angular/router'; @Component({ selector: 'app-compose-message', templateUrl: './compose-message.component.html', styleUrls: ['./compose-message.component.css'] }) export class ComposeMessageComponent { details = ''; message = ''; sending = false; constructor(private router: Router) {} send() { this.sending = true; this.details = 'Sending Message...'; setTimeout(() => { this.sending = false; this.closePopup(); }, 1000); } cancel() { this.closePopup(); } closePopup() { // Providing a `null` value to the named outlet // clears the contents of the named outlet this.router.navigate([{ outlets: { popup: null }}]); } } ``` ``` textarea { width: 100%; margin-top: 1rem; font-size: 1.2rem; box-sizing: border-box; } ``` It looks similar to any other component in this guide, but there are two key differences. > **NOTE**: The `send()` method simulates latency by waiting a second before "sending" the message and closing the popup. 
>
> The `closePopup()` method closes the popup view by navigating to the popup outlet with a `null` value, which the section on [clearing secondary routes](router-tutorial-toh#clear-secondary-routes) covers.

#### Add a secondary route

Open the `AppRoutingModule` and add a new `compose` route to the `appRoutes`.

```
{ path: 'compose', component: ComposeMessageComponent, outlet: 'popup' },
```

In addition to the `path` and `component` properties, there's a new property called `outlet`, which is set to `'popup'`. This route now targets the popup outlet and the `ComposeMessageComponent` will display there.

To give users a way to open the popup, add a "Contact" link to the `AppComponent` template.

```
<a [routerLink]="[{ outlets: { popup: ['compose'] } }]">Contact</a>
```

Although the `compose` route is configured for the "popup" outlet, that's not sufficient for connecting the route to a `[RouterLink](../api/router/routerlink)` directive. You have to specify the named outlet in a *link parameters array* and bind it to the `[RouterLink](../api/router/routerlink)` with a property binding.

The *link parameters array* contains an object with a single `outlets` property whose value is another object keyed by one (or more) outlet names. In this case there is only the "popup" outlet property and its value is another *link parameters array* that specifies the `compose` route.

In other words, when the user clicks this link, the router displays the component associated with the `compose` route in the `popup` outlet.

> This `outlets` object within an outer object was unnecessary when there was only one route and one unnamed outlet.
>
> The router assumed that your route specification targeted the unnamed primary outlet and created these objects for you.
>
> Routing to a named outlet revealed a router feature: you can target multiple outlets with multiple routes in the same `[RouterLink](../api/router/routerlink)` directive.
>

#### Secondary route navigation: merging routes during navigation

Navigate to the *Crisis Center* and click "Contact". You should see something like the following URL in the browser address bar.

```
http://…/crisis-center(popup:compose)
```

The relevant part of the URL follows the `...`:

* The `crisis-center` is the primary navigation
* Parentheses surround the secondary route
* The secondary route consists of an outlet name (`popup`), a colon separator, and the secondary route path (`compose`)

Click the *Heroes* link and look at the URL again.

```
http://…/heroes(popup:compose)
```

The primary navigation part changed; the secondary route is the same.

The router is keeping track of two separate branches in a navigation tree and generating a representation of that tree in the URL.

You can add many more outlets and routes, at the top level and in nested levels, creating a navigation tree with many branches, and the router will generate the URLs to go with it.

You can tell the router to navigate an entire tree at once by filling out the `outlets` object and passing that object inside a *link parameters array* to the `router.navigate` method, as shown in the sketch below.

#### Clearing secondary routes

Like regular outlets, secondary outlets persist until you navigate away to a new component.

Each secondary outlet has its own navigation, independent of the navigation driving the primary outlet. Changing a current route that displays in the primary outlet has no effect on the popup outlet. That's why the popup stays visible as you navigate among the crises and heroes.
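As noted above, a single `navigate()` call can update several outlets at once by filling out the `outlets` object. A minimal sketch using this example's outlet and route names (the exact call is an illustration, not code from the sample):

```
// Navigate the primary outlet to the crisis center and the popup
// outlet to the compose form in one call.
this.router.navigate([{ outlets: { primary: ['crisis-center'], popup: ['compose'] } }]);
```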
The `closePopup()` method again: ``` closePopup() { // Providing a `null` value to the named outlet // clears the contents of the named outlet this.router.navigate([{ outlets: { popup: null }}]); } ``` Clicking the "send" or "cancel" buttons clears the popup view. The `closePopup()` function navigates imperatively with the `[Router.navigate()](../api/router/router#navigate)` method, passing in a [link parameters array](router-tutorial-toh#link-parameters-array). Like the array bound to the *Contact* `[RouterLink](../api/router/routerlink)` in the `AppComponent`, this one includes an object with an `outlets` property. The `outlets` property value is another object with outlet names for keys. The only named outlet is `'popup'`. This time, the value of `'popup'` is `null`. That's not a route, but it is a legitimate value. Setting the popup `[RouterOutlet](../api/router/routeroutlet)` to `null` clears the outlet and removes the secondary popup route from the current URL. Milestone 5: Route guards ------------------------- At the moment, any user can navigate anywhere in the application any time, but sometimes you need to control access to different parts of your application for various reasons, some of which might include the following: * Perhaps the user is not authorized to navigate to the target component * Maybe the user must login (authenticate) first * Maybe you should fetch some data before you display the target component * You might want to save pending changes before leaving a component * You might ask the user if it's okay to discard pending changes rather than save them You add guards to the route configuration to handle these scenarios. A guard's return value controls the router's behavior: | Guard return value | Details | | --- | --- | | `true` | The navigation process continues | | `false` | The navigation process stops and the user stays put | | `[UrlTree](../api/router/urltree)` | The current navigation cancels and a new navigation is initiated to the `[UrlTree](../api/router/urltree)` returned | > **Note:** The guard can also tell the router to navigate elsewhere, effectively canceling the current navigation. When doing so inside a guard, the guard should return `[UrlTree](../api/router/urltree)`. > > The guard might return its boolean answer synchronously. But in many cases, the guard can't produce an answer synchronously. The guard could ask the user a question, save changes to the server, or fetch fresh data. These are all asynchronous operations. Accordingly, a routing guard can return an `Observable<boolean>` or a `Promise<boolean>` and the router will wait for the observable or the promise to resolve to `true` or `false`. > **NOTE**: The observable provided to the `[Router](../api/router/router)` automatically completes after it retrieves the first value. > > The router supports multiple guard methods: | Guard interfaces | Details | | --- | --- | | [`canActivate`](../api/router/canactivatefn) | To mediate navigation *to* a route | | [`canActivateChild`](../api/router/canactivatechildfn) | To mediate navigation *to* a child route | | [`canDeactivate`](../api/router/candeactivatefn) | To mediate navigation *away* from the current route | | [`resolve`](../api/router/resolvefn) | To perform route data retrieval *before* route activation | | [`canMatch`](../api/router/canmatchfn) | To control whether a `[Route](../api/router/route)` should be used at all, even if the `path` matches the URL segment | You can have multiple guards at every level of a routing hierarchy. 
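To illustrate the asynchronous case described above, here is a minimal sketch of a functional guard that returns an `Observable<boolean>`. The `AsyncAuthService` here is a hypothetical placeholder for illustration only; the sample application's own `AuthService`, shown later in this milestone, exposes a synchronous `isLoggedIn` flag instead.

```
import { inject, Injectable } from '@angular/core';
import { CanActivateFn } from '@angular/router';
import { Observable, of } from 'rxjs';
import { take } from 'rxjs/operators';

// Hypothetical service for illustration only.
@Injectable({ providedIn: 'root' })
export class AsyncAuthService {
  isLoggedIn$: Observable<boolean> = of(true);
}

export const asyncAuthGuard: CanActivateFn = () => {
  // The router subscribes and uses the first emitted boolean to decide
  // whether the navigation continues.
  return inject(AsyncAuthService).isLoggedIn$.pipe(take(1));
};
```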
The router checks the `canDeactivate` guards first, from the deepest child route to the top. Then it checks the `canActivate` and `canActivateChild` guards from the top down to the deepest child route. If the feature module is loaded asynchronously, the `canMatch` guard is checked before the module is loaded. With the exception of `canMatch`, if *any* guard returns false, pending guards that have not completed are canceled, and the entire navigation is canceled. If a `canMatch` guard returns `false`, the `[Router](../api/router/router)` continues processing the rest of the `[Routes](../api/router/routes)` to see if a different `[Route](../api/router/route)` config matches the URL. You can think of this as though the `[Router](../api/router/router)` is pretending the `[Route](../api/router/route)` with the `canMatch` guard did not exist. There are several examples over the next few sections. ### `canActivate`: requiring authentication Applications often restrict access to a feature area based on who the user is. You could permit access only to authenticated users or to users with a specific role. You might block or limit access until the user's account is activated. The `canActivate` guard is the tool to manage these navigation business rules. #### Add an admin feature module This section guides you through extending the crisis center with some new administrative features. Start by adding a new feature module named `AdminModule`. Generate an `admin` folder with a feature module file and a routing configuration file. ``` ng generate module admin --routing ``` Next, generate the supporting components. ``` ng generate component admin/admin-dashboard ``` ``` ng generate component admin/admin ``` ``` ng generate component admin/manage-crises ``` ``` ng generate component admin/manage-heroes ``` The admin feature file structure looks like this: ``` src/app/admin admin admin.component.css admin.component.html admin.component.ts admin-dashboard admin-dashboard.component.css admin-dashboard.component.html admin-dashboard.component.ts manage-crises manage-crises.component.css manage-crises.component.html manage-crises.component.ts manage-heroes manage-heroes.component.css manage-heroes.component.html manage-heroes.component.ts admin.module.ts admin-routing.module.ts ``` The admin feature module contains the `AdminComponent` used for routing within the feature module, a dashboard route and two unfinished components to manage crises and heroes. 
``` <h2>Admin</h2> <nav> <a routerLink="./" routerLinkActive="active" [routerLinkActiveOptions]="{ exact: true }" ariaCurrentWhenActive="page">Dashboard</a> <a routerLink="./crises" routerLinkActive="active" ariaCurrentWhenActive="page">Manage Crises</a> <a routerLink="./heroes" routerLinkActive="active" ariaCurrentWhenActive="page">Manage Heroes</a> </nav> <router-outlet></router-outlet> ``` ``` <h3>Dashboard</h3> ``` ``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { AdminComponent } from './admin/admin.component'; import { AdminDashboardComponent } from './admin-dashboard/admin-dashboard.component'; import { ManageCrisesComponent } from './manage-crises/manage-crises.component'; import { ManageHeroesComponent } from './manage-heroes/manage-heroes.component'; import { AdminRoutingModule } from './admin-routing.module'; @NgModule({ imports: [ CommonModule, AdminRoutingModule ], declarations: [ AdminComponent, AdminDashboardComponent, ManageCrisesComponent, ManageHeroesComponent ] }) export class AdminModule {} ``` ``` <p>Manage your crises here</p> ``` ``` <p>Manage your heroes here</p> ``` > Although the admin dashboard `[RouterLink](../api/router/routerlink)` only contains a relative slash without an additional URL segment, it is a match to any route within the admin feature area. You only want the `Dashboard` link to be active when the user visits that route. Adding an additional binding to the `Dashboard` routerLink,`[routerLinkActiveOptions]="{ exact: true }"`, marks the `./` link as active when the user navigates to the `/admin` URL and not when navigating to any of the child routes. > > ##### Component-less route: grouping routes without a component The initial admin routing configuration: ``` const adminRoutes: Routes = [ { path: 'admin', component: AdminComponent, children: [ { path: '', children: [ { path: 'crises', component: ManageCrisesComponent }, { path: 'heroes', component: ManageHeroesComponent }, { path: '', component: AdminDashboardComponent } ] } ] } ]; @NgModule({ imports: [ RouterModule.forChild(adminRoutes) ], exports: [ RouterModule ] }) export class AdminRoutingModule {} ``` The child route under the `AdminComponent` has a `path` and a `children` property but it's not using a `component`. This defines a *component-less* route. To group the `Crisis Center` management routes under the `admin` path a component is unnecessary. Additionally, a *component-less* route makes it easier to [guard child routes](router-tutorial-toh#can-activate-child-guard). Next, import the `AdminModule` into `app.module.ts` and add it to the `imports` array to register the admin routes. 
``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; import { ComposeMessageComponent } from './compose-message/compose-message.component'; import { AppRoutingModule } from './app-routing.module'; import { HeroesModule } from './heroes/heroes.module'; import { CrisisCenterModule } from './crisis-center/crisis-center.module'; import { AdminModule } from './admin/admin.module'; @NgModule({ imports: [ CommonModule, FormsModule, HeroesModule, CrisisCenterModule, AdminModule, AppRoutingModule ], declarations: [ AppComponent, ComposeMessageComponent, PageNotFoundComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` Add an "Admin" link to the `AppComponent` shell so that users can get to this feature. ``` <h1 class="title">Angular Router</h1> <nav> <a routerLink="/crisis-center" routerLinkActive="active" ariaCurrentWhenActive="page">Crisis Center</a> <a routerLink="/heroes" routerLinkActive="active" ariaCurrentWhenActive="page">Heroes</a> <a routerLink="/admin" routerLinkActive="active" ariaCurrentWhenActive="page">Admin</a> <a [routerLink]="[{ outlets: { popup: ['compose'] } }]">Contact</a> </nav> <div [@routeAnimation]="getAnimationData()"> <router-outlet></router-outlet> </div> <router-outlet name="popup"></router-outlet> ``` #### Guard the admin feature Currently, every route within the Crisis Center is open to everyone. The new admin feature should be accessible only to authenticated users. Write a `canActivate()` guard method to redirect anonymous users to the login page when they try to enter the admin area. Create a new file named `auth.guard.ts` in the `auth` folder. The `auth.guard.ts` file will contain the `authGuard` function. To demonstrate the fundamentals, this example only logs to the console, returns `true` immediately, and lets navigation proceed: ``` export const authGuard = () => { console.log('authGuard#canActivate called'); return true; }; ``` Next, open `admin-routing.module.ts`, import the `authGuard` function, and update the admin route with a `canActivate` guard property that references it: ``` import {authGuard} from '../auth/auth.guard'; import {AdminDashboardComponent} from './admin-dashboard/admin-dashboard.component'; import {AdminComponent} from './admin/admin.component'; import {ManageCrisesComponent} from './manage-crises/manage-crises.component'; import {ManageHeroesComponent} from './manage-heroes/manage-heroes.component'; const adminRoutes: Routes = [{ path: 'admin', component: AdminComponent, canActivate: [authGuard], children: [{ path: '', children: [ {path: 'crises', component: ManageCrisesComponent}, {path: 'heroes', component: ManageHeroesComponent}, {path: '', component: AdminDashboardComponent} ], }] }]; @NgModule({imports: [RouterModule.forChild(adminRoutes)], exports: [RouterModule]}) export class AdminRoutingModule { } ``` The admin feature is now protected by the guard, but the guard requires more customization to work fully. #### Authenticate with `authGuard` Make the `authGuard` mimic authentication. The `authGuard` should call an application service that can log in a user and retain information about the current user. 
Generate a new `AuthService` in the `auth` folder: ``` ng generate service auth/auth ``` Update the `AuthService` to log in the user: ``` import { Injectable } from '@angular/core'; import { Observable, of } from 'rxjs'; import { tap, delay } from 'rxjs/operators'; @Injectable({ providedIn: 'root', }) export class AuthService { isLoggedIn = false; // store the URL so we can redirect after logging in redirectUrl: string | null = null; login(): Observable<boolean> { return of(true).pipe( delay(1000), tap(() => this.isLoggedIn = true) ); } logout(): void { this.isLoggedIn = false; } } ``` Although it doesn't actually log in, it has an `isLoggedIn` flag to tell you whether the user is authenticated. Its `login()` method simulates an API call to an external service by returning an observable that resolves successfully after a short pause. The `redirectUrl` property stores the URL that the user wanted to access so you can navigate to it after authentication. > To keep things minimal, this example redirects unauthenticated users to `/admin`. > > Revise the `authGuard` to call the `AuthService`. ``` import {inject} from '@angular/core'; import { Router } from '@angular/router'; import {AuthService} from './auth.service'; export const authGuard = () => { const authService = inject(AuthService); const router = inject(Router); if (authService.isLoggedIn) { return true; } // Redirect to the login page return router.parseUrl('/login'); }; ``` This guard returns a synchronous boolean result or a `[UrlTree](../api/router/urltree)`. If the user is logged in, it returns `true` and the navigation continues. Otherwise, it redirects to a login page; a page you haven't created yet. Returning a `[UrlTree](../api/router/urltree)` tells the `[Router](../api/router/router)` to cancel the current navigation and schedule a new one to redirect the user. #### Add the `LoginComponent` You need a `LoginComponent` for the user to log in to the application. After logging in, you'll redirect to the stored URL if available, or use the default URL. There is nothing new about this component or the way you use it in the router configuration. ``` ng generate component auth/login ``` Register a `/login` route in the `auth/auth-routing.module.ts` file. In `app.module.ts`, import and add `AuthModule` to the `AppModule` imports array. 
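The `auth/auth-routing.module.ts` file isn't listed in this guide. A minimal sketch, assuming it follows the same pattern as the other feature routing modules, might look like this:

```
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

import { LoginComponent } from './login/login.component';

const authRoutes: Routes = [
  { path: 'login', component: LoginComponent }
];

@NgModule({
  imports: [ RouterModule.forChild(authRoutes) ],
  exports: [ RouterModule ]
})
export class AuthRoutingModule {}
```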
``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; import { AppComponent } from './app.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; import { ComposeMessageComponent } from './compose-message/compose-message.component'; import { AppRoutingModule } from './app-routing.module'; import { HeroesModule } from './heroes/heroes.module'; import { AuthModule } from './auth/auth.module'; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, FormsModule, HeroesModule, AuthModule, AppRoutingModule, ], declarations: [ AppComponent, ComposeMessageComponent, PageNotFoundComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` <h2>Login</h2> <p>{{message}}</p> <p> <button type="button" (click)="login()" *ngIf="!authService.isLoggedIn">Login</button> <button type="button" (click)="logout()" *ngIf="authService.isLoggedIn">Logout</button> </p> ``` ``` import { Component } from '@angular/core'; import { Router } from '@angular/router'; import { AuthService } from '../auth.service'; @Component({ selector: 'app-login', templateUrl: './login.component.html', styleUrls: ['./login.component.css'] }) export class LoginComponent { message: string; constructor(public authService: AuthService, public router: Router) { this.message = this.getMessage(); } getMessage() { return 'Logged ' + (this.authService.isLoggedIn ? 'in' : 'out'); } login() { this.message = 'Trying to log in ...'; this.authService.login().subscribe(() => { this.message = this.getMessage(); if (this.authService.isLoggedIn) { // Usually you would use the redirect URL from the auth service. // However to keep the example simple, we will always redirect to `/admin`. const redirectUrl = '/admin'; // Redirect the user this.router.navigate([redirectUrl]); } }); } logout() { this.authService.logout(); this.message = this.getMessage(); } } ``` ``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { FormsModule } from '@angular/forms'; import { LoginComponent } from './login/login.component'; import { AuthRoutingModule } from './auth-routing.module'; @NgModule({ imports: [ CommonModule, FormsModule, AuthRoutingModule ], declarations: [ LoginComponent ] }) export class AuthModule {} ``` ### `canActivateChild`: guarding child routes You can also protect child routes with the `canActivateChild` guard. The `canActivateChild` guard is similar to the `canActivate` guard. The key difference is that it runs before any child route is activated. You protected the admin feature module from unauthorized access. You should also protect child routes *within* the feature module. Add the same `authGuard` to the `component-less` admin route to protect all other child routes at one time instead of adding the `authGuard` to each route individually. 
```
const adminRoutes: Routes = [
  {
    path: 'admin',
    component: AdminComponent,
    canActivate: [authGuard],
    children: [
      {
        path: '',
        canActivateChild: [authGuard],
        children: [
          { path: 'crises', component: ManageCrisesComponent },
          { path: 'heroes', component: ManageHeroesComponent },
          { path: '', component: AdminDashboardComponent }
        ]
      }
    ]
  }
];

@NgModule({
  imports: [
    RouterModule.forChild(adminRoutes)
  ],
  exports: [
    RouterModule
  ]
})
export class AdminRoutingModule {}
```

### `canDeactivate`: handling unsaved changes

Back in the "Heroes" workflow, the application accepts every change to a hero immediately without validation. In the real world, you might have to accumulate the user's changes, validate across fields, validate on the server, or hold changes in a pending state until the user confirms them as a group or cancels and reverts all changes.

When the user navigates away, you can let the user decide what to do with unsaved changes. If the user cancels, you'll stay put and allow more changes. If the user approves, the application can save.

You still might delay navigation until the save succeeds. If you let the user move to the next screen immediately and saving were to fail (perhaps the data is ruled invalid), you would lose the context of the error. You need to stop the navigation while you wait, asynchronously, for the server to return with its answer.

The `canDeactivate` guard helps you decide what to do with unsaved changes and how to proceed.

#### Cancel and save

Users update crisis information in the `CrisisDetailComponent`. Unlike in the `HeroDetailComponent`, the user's changes do not update the crisis entity immediately. Instead, the application updates the entity when the user presses the Save button and discards the changes when the user presses the Cancel button.

Both buttons navigate back to the crisis list after save or cancel.

```
cancel() {
  this.gotoCrises();
}

save() {
  this.crisis.name = this.editName;
  this.gotoCrises();
}
```

In this scenario, the user could click the heroes link, cancel, push the browser back button, or navigate away without saving.

This example application asks the user to be explicit with a confirmation dialog box that waits asynchronously for the user's response.

> You could wait for the user's answer with synchronous, blocking code. However, the application is more responsive, and can do other work, by waiting for the user's answer asynchronously.
>

Generate a `Dialog` service to handle user confirmation.

```
ng generate service dialog
```

Add a `confirm()` method to the `DialogService` to prompt the user to confirm their intent. `window.confirm` is a blocking action that displays a modal dialog and waits for user interaction.

```
import { Injectable } from '@angular/core';
import { Observable, of } from 'rxjs';

/**
 * Async modal dialog service
 * DialogService makes this app easier to test by faking this service.
 * TODO: better modal implementation that doesn't use window.confirm
 */
@Injectable({
  providedIn: 'root',
})
export class DialogService {
  /**
   * Ask user to confirm an action. `message` explains the action and choices.
   * Returns observable resolving to `true`=confirm or `false`=cancel
   */
  confirm(message?: string): Observable<boolean> {
    const confirmation = window.confirm(message || 'Is it OK?');

    return of(confirmation);
  }
}
```

It returns an `Observable` that resolves when the user eventually decides what to do: either to discard changes and navigate away (`true`) or to preserve the pending changes and stay in the crisis editor (`false`).
Create a guard that checks for the presence of a `canDeactivate()` method in a component —any component. Paste the following code into your guard. ``` import { CanDeactivateFn } from '@angular/router'; import { Observable } from 'rxjs'; export interface CanComponentDeactivate { canDeactivate?: () => Observable<boolean> | Promise<boolean> | boolean; } export const canDeactivateGuard: CanDeactivateFn<CanComponentDeactivate> = (component: CanComponentDeactivate) => component.canDeactivate ? component.canDeactivate() : true; ``` While the guard doesn't have to know which component has a `deactivate` method, it can detect that the `CrisisDetailComponent` component has the `canDeactivate()` method and call it. The guard not knowing the details of any component's deactivation method makes the guard reusable. Alternatively, you could make a component-specific `canDeactivate` guard for the `CrisisDetailComponent`. The `canDeactivate()` method provides you with the current instance of the `component`, the current `[ActivatedRoute](../api/router/activatedroute)`, and `[RouterStateSnapshot](../api/router/routerstatesnapshot)` in case you needed to access some external information. This would be useful if you only wanted to use this guard for this component and needed to get the component's properties or confirm whether the router should allow navigation away from it. ``` import { Observable } from 'rxjs'; import { CanDeactivateFn, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router'; import { CrisisDetailComponent } from './crisis-center/crisis-detail/crisis-detail.component'; export const canDeactivateGuard: CanDeactivateFn<CrisisDetailComponent> = ( component: CrisisDetailComponent, route: ActivatedRouteSnapshot, state: RouterStateSnapshot ): Observable<boolean> | boolean => { // Get the Crisis Center ID console.log(route.paramMap.get('id')); // Get the current URL console.log(state.url); // Allow synchronous navigation (`true`) if no crisis or the crisis is unchanged if (!component.crisis || component.crisis.name === component.editName) { return true; } // Otherwise ask the user with the dialog service and return its // observable which resolves to true or false when the user decides return component.dialogService.confirm('Discard changes?'); }; ``` Looking back at the `CrisisDetailComponent`, it implements the confirmation workflow for unsaved changes. ``` canDeactivate(): Observable<boolean> | boolean { // Allow synchronous navigation (`true`) if no crisis or the crisis is unchanged if (!this.crisis || this.crisis.name === this.editName) { return true; } // Otherwise ask the user with the dialog service and return its // observable which resolves to true or false when the user decides return this.dialogService.confirm('Discard changes?'); } ``` Notice that the `canDeactivate()` method can return synchronously; it returns `true` immediately if there is no crisis or there are no pending changes. But it can also return a `Promise` or an `Observable` and the router will wait for that to resolve to truthy (navigate) or falsy (stay on the current route). Add the `Guard` to the crisis detail route in `crisis-center-routing.module.ts` using the `canDeactivate` array property. 
``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { CrisisCenterHomeComponent } from './crisis-center-home/crisis-center-home.component'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { CrisisCenterComponent } from './crisis-center/crisis-center.component'; import { CrisisDetailComponent } from './crisis-detail/crisis-detail.component'; import { canDeactivateGuard } from '../can-deactivate.guard'; const crisisCenterRoutes: Routes = [ { path: 'crisis-center', component: CrisisCenterComponent, children: [ { path: '', component: CrisisListComponent, children: [ { path: ':id', component: CrisisDetailComponent, canDeactivate: [canDeactivateGuard] }, { path: '', component: CrisisCenterHomeComponent } ] } ] } ]; @NgModule({ imports: [ RouterModule.forChild(crisisCenterRoutes) ], exports: [ RouterModule ] }) export class CrisisCenterRoutingModule { } ``` Now you have given the user a safeguard against unsaved changes. ### `Resolve`: pre-fetching component data In the `Hero Detail` and `Crisis Detail`, the application waited until the route was activated to fetch the respective hero or crisis. If you were using a real world API, there might be some delay before the data to display is returned from the server. You don't want to display a blank component while waiting for the data. To improve this behavior, you can pre-fetch data from the server using a resolver so it's ready the moment the route is activated. This also lets you handle errors before routing to the component. There's no point in navigating to a crisis detail for an `id` that doesn't have a record. It'd be better to send the user back to the `Crisis List` that shows only valid crisis centers. In summary, you want to delay rendering the routed component until all necessary data has been fetched. #### Fetch data before navigating At the moment, the `CrisisDetailComponent` retrieves the selected crisis. If the crisis is not found, the router navigates back to the crisis list view. The experience might be better if all of this were handled first, before the route is activated. A `crisisDetailResolver` could retrieve a `Crisis` or navigate away, if the `Crisis` did not exist, *before* activating the route and creating the `CrisisDetailComponent`. Create a `crisis-detail-resolver.ts` file within the `Crisis Center` feature area. This file will contain the `crisisDetailResolver` function. ``` export function crisisDetailResolver() { } ``` Move the relevant parts of the crisis retrieval logic in `CrisisDetailComponent.ngOnInit()` into the `crisisDetailResolver`. Import the `Crisis` model, `CrisisService`, and the `[Router](../api/router/router)` so you can navigate elsewhere if you can't fetch the crisis. Be explicit and use the `[ResolveFn](../api/router/resolvefn)` type with a type of `Crisis`. Inject the `CrisisService` and `[Router](../api/router/router)`. That method could return a `Promise`, an `Observable`, or a synchronous return value. The `CrisisService.getCrisis()` method returns an observable in order to prevent the route from loading until the data is fetched. If it doesn't return a valid `Crisis`, then return an empty `Observable`, cancel the previous in-progress navigation to the `CrisisDetailComponent`, and navigate the user back to the `CrisisListComponent`. 
The updated resolver function looks like this: ``` import {inject} from '@angular/core'; import {ActivatedRouteSnapshot, ResolveFn, Router} from '@angular/router'; import {EMPTY, of} from 'rxjs'; import {mergeMap} from 'rxjs/operators'; import {Crisis} from './crisis'; import {CrisisService} from './crisis.service'; export const crisisDetailResolver: ResolveFn<Crisis> = (route: ActivatedRouteSnapshot) => { const router = inject(Router); const cs = inject(CrisisService); const id = route.paramMap.get('id')!; return cs.getCrisis(id).pipe(mergeMap(crisis => { if (crisis) { return of(crisis); } else { // id not found router.navigate(['/crisis-center']); return EMPTY; } })); }; ``` Import this resolver in the `crisis-center-routing.module.ts` and add a `resolve` object to the `CrisisDetailComponent` route configuration. ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { CrisisCenterHomeComponent } from './crisis-center-home/crisis-center-home.component'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { CrisisCenterComponent } from './crisis-center/crisis-center.component'; import { CrisisDetailComponent } from './crisis-detail/crisis-detail.component'; import { canDeactivateGuard } from '../can-deactivate.guard'; import { crisisDetailResolver } from './crisis-detail-resolver'; const crisisCenterRoutes: Routes = [ { path: 'crisis-center', component: CrisisCenterComponent, children: [ { path: '', component: CrisisListComponent, children: [ { path: ':id', component: CrisisDetailComponent, canDeactivate: [canDeactivateGuard], resolve: { crisis: crisisDetailResolver } }, { path: '', component: CrisisCenterHomeComponent } ] } ] } ]; @NgModule({ imports: [ RouterModule.forChild(crisisCenterRoutes) ], exports: [ RouterModule ] }) export class CrisisCenterRoutingModule { } ``` The `CrisisDetailComponent` should no longer fetch the crisis. When you re-configured the route, you changed where the crisis is. Update the `CrisisDetailComponent` to get the crisis from the `ActivatedRoute.data.crisis` property instead; ``` ngOnInit() { this.route.data .subscribe(data => { const crisis: Crisis = data['crisis']; this.editName = crisis.name; this.crisis = crisis; }); } ``` Review the following three important points: 1. The router's `[ResolveFn](../api/router/resolvefn)` is optional. 2. The router calls the resolver in any case where the user could navigate away so you don't have to code for each use case. 3. Returning an empty `Observable` in at least one resolver cancels navigation. The relevant Crisis Center code for this milestone follows. 
``` <div class="wrapper"> <h1 class="title">Angular Router</h1> <nav> <a routerLink="/crisis-center" routerLinkActive="active" ariaCurrentWhenActive="page">Crisis Center</a> <a routerLink="/superheroes" routerLinkActive="active" ariaCurrentWhenActive="page">Heroes</a> <a routerLink="/admin" routerLinkActive="active" ariaCurrentWhenActive="page">Admin</a> <a routerLink="/login" routerLinkActive="active" ariaCurrentWhenActive="page">Login</a> <a [routerLink]="[{ outlets: { popup: ['compose'] } }]">Contact</a> </nav> <div [@routeAnimation]="getRouteAnimationData()"> <router-outlet></router-outlet> </div> <router-outlet name="popup"></router-outlet> </div> ``` ``` <h3>Welcome to the Crisis Center</h3> ``` ``` <h2>Crisis Center</h2> <router-outlet></router-outlet> ``` ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { CrisisCenterHomeComponent } from './crisis-center-home/crisis-center-home.component'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { CrisisCenterComponent } from './crisis-center/crisis-center.component'; import { CrisisDetailComponent } from './crisis-detail/crisis-detail.component'; import { canDeactivateGuard } from '../can-deactivate.guard'; import { crisisDetailResolver } from './crisis-detail-resolver'; const crisisCenterRoutes: Routes = [ { path: 'crisis-center', component: CrisisCenterComponent, children: [ { path: '', component: CrisisListComponent, children: [ { path: ':id', component: CrisisDetailComponent, canDeactivate: [canDeactivateGuard], resolve: { crisis: crisisDetailResolver } }, { path: '', component: CrisisCenterHomeComponent } ] } ] } ]; @NgModule({ imports: [ RouterModule.forChild(crisisCenterRoutes) ], exports: [ RouterModule ] }) export class CrisisCenterRoutingModule { } ``` ``` <ul class="crises"> <li *ngFor="let crisis of crises$ | async" [class.selected]="crisis.id === selectedId"> <a [routerLink]="[crisis.id]"> <span class="badge">{{ crisis.id }}</span>{{ crisis.name }} </a> </li> </ul> <router-outlet></router-outlet> ``` ``` import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { CrisisService } from '../crisis.service'; import { Crisis } from '../crisis'; import { Observable } from 'rxjs'; import { switchMap } from 'rxjs/operators'; @Component({ selector: 'app-crisis-list', templateUrl: './crisis-list.component.html', styleUrls: ['./crisis-list.component.css'] }) export class CrisisListComponent implements OnInit { crises$?: Observable<Crisis[]>; selectedId = 0; constructor( private service: CrisisService, private route: ActivatedRoute ) {} ngOnInit() { this.crises$ = this.route.firstChild?.paramMap.pipe( switchMap(params => { this.selectedId = parseInt(params.get('id')!, 10); return this.service.getCrises(); }) ); } } ``` ``` <div *ngIf="crisis"> <h3>{{ editName }}</h3> <p>Id: {{ crisis.id }}</p> <label for="crisis-name">Crisis name: </label> <input type="text" id="crisis-name" [(ngModel)]="editName" placeholder="name"/> <div> <button type="button" (click)="save()">Save</button> <button type="button" (click)="cancel()">Cancel</button> </div> </div> ``` ``` import { Component, OnInit } from '@angular/core'; import { ActivatedRoute, Router } from '@angular/router'; import { Observable } from 'rxjs'; import { Crisis } from '../crisis'; import { DialogService } from '../../dialog.service'; @Component({ selector: 'app-crisis-detail', templateUrl: './crisis-detail.component.html', styleUrls: 
['./crisis-detail.component.css'] }) export class CrisisDetailComponent implements OnInit { crisis!: Crisis; editName = ''; constructor( private route: ActivatedRoute, private router: Router, public dialogService: DialogService ) {} ngOnInit() { this.route.data .subscribe(data => { const crisis: Crisis = data['crisis']; this.editName = crisis.name; this.crisis = crisis; }); } cancel() { this.gotoCrises(); } save() { this.crisis.name = this.editName; this.gotoCrises(); } canDeactivate(): Observable<boolean> | boolean { // Allow synchronous navigation (`true`) if no crisis or the crisis is unchanged if (!this.crisis || this.crisis.name === this.editName) { return true; } // Otherwise ask the user with the dialog service and return its // observable which resolves to true or false when the user decides return this.dialogService.confirm('Discard changes?'); } gotoCrises() { const crisisId = this.crisis ? this.crisis.id : null; // Pass along the crisis id if available // so that the CrisisListComponent can select that crisis. // Add a totally useless `foo` parameter for kicks. // Relative navigation back to the crises this.router.navigate(['../', { id: crisisId, foo: 'foo' }], { relativeTo: this.route }); } } ``` ``` import {inject} from '@angular/core'; import {ActivatedRouteSnapshot, ResolveFn, Router} from '@angular/router'; import {EMPTY, of} from 'rxjs'; import {mergeMap} from 'rxjs/operators'; import {Crisis} from './crisis'; import {CrisisService} from './crisis.service'; export const crisisDetailResolver: ResolveFn<Crisis> = (route: ActivatedRouteSnapshot) => { const router = inject(Router); const cs = inject(CrisisService); const id = route.paramMap.get('id')!; return cs.getCrisis(id).pipe(mergeMap(crisis => { if (crisis) { return of(crisis); } else { // id not found router.navigate(['/crisis-center']); return EMPTY; } })); }; ``` ``` import { BehaviorSubject } from 'rxjs'; import { map } from 'rxjs/operators'; import { Injectable } from '@angular/core'; import { MessageService } from '../message.service'; import { Crisis } from './crisis'; import { CRISES } from './mock-crises'; @Injectable({ providedIn: 'root', }) export class CrisisService { static nextCrisisId = 100; private crises$: BehaviorSubject<Crisis[]> = new BehaviorSubject<Crisis[]>(CRISES); constructor(private messageService: MessageService) { } getCrises() { return this.crises$; } getCrisis(id: number | string) { return this.getCrises().pipe( map(crises => crises.find(crisis => crisis.id === +id)!) ); } } ``` ``` import { Injectable } from '@angular/core'; import { Observable, of } from 'rxjs'; /** * Async modal dialog service * DialogService makes this app easier to test by faking this service. * TODO: better modal implementation that doesn't use window.confirm */ @Injectable({ providedIn: 'root', }) export class DialogService { /** * Ask user to confirm an action. `message` explains the action and choices. 
* Returns observable resolving to `true`=confirm or `false`=cancel */ confirm(message?: string): Observable<boolean> { const confirmation = window.confirm(message || 'Is it OK?'); return of(confirmation); } } ``` Guards ``` import { inject } from '@angular/core'; import { Router } from '@angular/router'; import { AuthService } from './auth.service'; export const authGuard = () => { const authService = inject(AuthService); const router = inject(Router); if (authService.isLoggedIn) { return true; } // Redirect to the login page return router.parseUrl('/login'); }; ``` ``` import { CanDeactivateFn } from '@angular/router'; import { Observable } from 'rxjs'; export interface CanComponentDeactivate { canDeactivate?: () => Observable<boolean> | Promise<boolean> | boolean; } export const canDeactivateGuard: CanDeactivateFn<CanComponentDeactivate> = (component: CanComponentDeactivate) => component.canDeactivate ? component.canDeactivate() : true; ``` ### Query parameters and fragments In the [route parameters](router-tutorial-toh#optional-route-parameters) section, you only dealt with parameters specific to the route. However, you can use query parameters to get optional parameters available to all routes. [Fragments](https://en.wikipedia.org/wiki/Fragment_identifier) refer to certain elements on the page identified with an `id` attribute. Update the `authGuard` to provide a `session_id` query that remains after navigating to another route. Add an `anchor` element so you can jump to a certain point on the page. Add the `[NavigationExtras](../api/router/navigationextras)` object to the `router.navigate()` method that navigates you to the `/login` route. ``` import { inject } from '@angular/core'; import { Router, NavigationExtras } from '@angular/router'; import { AuthService } from './auth.service'; export const authGuard = () => { const authService = inject(AuthService); const router = inject(Router); if (authService.isLoggedIn) { return true; } // Create a dummy session id const sessionId = 123456789; // Set our navigation extras object // that contains our global query params and fragment const navigationExtras: NavigationExtras = { queryParams: { session_id: sessionId }, fragment: 'anchor' }; // Redirect to the login page with extras return router.createUrlTree(['/login'], navigationExtras); }; ``` You can also preserve query parameters and fragments across navigations without having to provide them again when navigating. In the `LoginComponent`, you'll add an *object* as the second argument in the `router.navigate()` function and provide the `queryParamsHandling` and `preserveFragment` to pass along the current query parameters and fragment to the next route. ``` // Set our navigation extras object // that passes on our global query params and fragment const navigationExtras: NavigationExtras = { queryParamsHandling: 'preserve', preserveFragment: true }; // Redirect the user this.router.navigate([redirectUrl], navigationExtras); ``` > The `queryParamsHandling` feature also provides a `merge` option, which preserves and combines the current query parameters with any provided query parameters when navigating. > > To navigate to the Admin Dashboard route after logging in, update `admin-dashboard.component.ts` to handle the query parameters and fragment. 
``` import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { Observable } from 'rxjs'; import { map } from 'rxjs/operators'; @Component({ selector: 'app-admin-dashboard', templateUrl: './admin-dashboard.component.html', styleUrls: ['./admin-dashboard.component.css'] }) export class AdminDashboardComponent implements OnInit { sessionId!: Observable<string>; token!: Observable<string>; constructor(private route: ActivatedRoute) {} ngOnInit() { // Capture the session ID if available this.sessionId = this.route .queryParamMap .pipe(map(params => params.get('session_id') || 'None')); // Capture the fragment if available this.token = this.route .fragment .pipe(map(fragment => fragment || 'None')); } } ``` Query parameters and fragments are also available through the `[ActivatedRoute](../api/router/activatedroute)` service. Like route parameters, the query parameters and fragments are provided as an `Observable`. The updated `AdminDashboardComponent` feeds the `Observable` directly into the template using the `[AsyncPipe](../api/common/asyncpipe)`. Now, you can click on the Admin button, which takes you to the Login page with the provided `queryParamMap` and `fragment`. After you click the login button, notice that you have been redirected to the `Admin Dashboard` page with the query parameters and fragment still intact in the address bar. You can use these persistent bits of information for things that need to be provided across pages like authentication tokens or session ids. > The query params and `fragment` can also be preserved using a `[RouterLink](../api/router/routerlink)` with the `queryParamsHandling` and `preserveFragment` bindings respectively. > > Milestone 6: Asynchronous routing --------------------------------- As you've worked through the milestones, the application has naturally gotten larger. Eventually, the application will take a long time to load. To remedy this issue, use asynchronous routing, which loads feature modules lazily, on request. Lazy loading has multiple benefits. * You can load feature areas only when requested by the user * You can speed up load time for users that only visit certain areas of the application * You can continue expanding lazy loaded feature areas without increasing the size of the initial load bundle You're already part of the way there. By organizing the application into modules —`AppModule`, `HeroesModule`, `AdminModule`, and `CrisisCenterModule`— you have natural candidates for lazy loading. Some modules, like `AppModule`, must be loaded from the start. But others can and should be lazy loaded. The `AdminModule`, for example, is needed by only a few authorized users, so you should only load it when requested by the right people. ### Lazy Loading route configuration Change the `admin` path in the `admin-routing.module.ts` from `'admin'` to an empty string, `''`, the empty path. Use empty path routes to group routes together without adding any additional path segments to the URL. Users will still visit `/admin` and the `AdminComponent` still serves as the Routing Component containing child routes. Open the `AppRoutingModule` and add a new `admin` route to its `appRoutes` array. Give it a `loadChildren` property instead of a `children` property. The `loadChildren` property takes a function that returns a promise using the browser's built-in syntax for lazy loading code using dynamic imports `import('...')`.
The path is the location of the `AdminModule` (relative to the application root). After the code is requested and loaded, the `Promise` resolves an object that contains the `[NgModule](../api/core/ngmodule)`, in this case the `AdminModule`. ``` { path: 'admin', loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule), }, ``` > **NOTE**: When using absolute paths, the `[NgModule](../api/core/ngmodule)` file location must begin with `src/app` in order to resolve correctly. For custom [path mapping with absolute paths](https://www.typescriptlang.org/docs/handbook/module-resolution.html#path-mapping), you must configure the `baseUrl` and `paths` properties in the project `tsconfig.json`. > > When the router navigates to this route, it uses the `loadChildren` function to dynamically load the `AdminModule`. Then it adds the `AdminModule` routes to its current route configuration. Finally, it loads the requested route to the destination admin component. The lazy loading and re-configuration happen just once, when the route is first requested; the module and routes are available immediately for subsequent requests. Take the final step and detach the admin feature set from the main application. The root `AppModule` must neither load nor reference the `AdminModule` or its files. In `app.module.ts`, remove the `AdminModule` import statement from the top of the file and remove the `AdminModule` from the NgModule's `imports` array. ### `canMatch`: guarding unauthorized access of feature modules You're already protecting the `AdminModule` with a `canActivate` guard that prevents unauthorized users from accessing the admin feature area. It redirects to the login page if the user is not authorized. But the router is still loading the `AdminModule` even if the user can't visit any of its components. Ideally, you'd only load the `AdminModule` if the user is logged in. A `canMatch` guard controls whether the `[Router](../api/router/router)` attempts to match a `[Route](../api/router/route)`. This lets you have multiple `[Route](../api/router/route)` configurations that share the same `path` but are matched based on different conditions. If the guard rejects the match, the `[Router](../api/router/router)` continues matching against the remaining configurations and can fall back to the wildcard `[Route](../api/router/route)` instead. The existing `authGuard` contains the logic to support the `canMatch` guard. Finally, add the `authGuard` to the `canMatch` array property for the `admin` route. The completed admin route looks like this: ``` { path: 'admin', loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule), canMatch: [authGuard] }, ``` ### Preloading: background loading of feature areas In addition to loading modules on-demand, you can load modules asynchronously with preloading. The `AppModule` is eagerly loaded when the application starts, meaning that it loads right away. Now the `AdminModule` loads only when the user clicks on a link, which is called lazy loading. Preloading lets you load modules in the background so that the data is ready to render when the user activates a particular route. Consider the Crisis Center. It isn't the first view that a user sees. By default, the Heroes are the first view. For the smallest initial payload and fastest launch time, you should eagerly load the `AppModule` and the `HeroesModule`. You could lazy load the Crisis Center. But you're almost certain that the user will visit the Crisis Center within minutes of launching the app.
Ideally, the application would launch with just the `AppModule` and the `HeroesModule` loaded and then, almost immediately, load the `CrisisCenterModule` in the background. By the time the user navigates to the Crisis Center, its module is loaded and ready. #### How preloading works After each successful navigation, the router looks in its configuration for an unloaded module that it can preload. Whether it preloads a module, and which modules it preloads, depends upon the preload strategy. The `[Router](../api/router/router)` offers two preloading strategies: | Strategies | Details | | --- | --- | | No preloading | The default. Lazy loaded feature areas are still loaded on-demand. | | Preloading | All lazy loaded feature areas are preloaded. | The router either never preloads, or preloads every lazy loaded module. The `[Router](../api/router/router)` also supports [custom preloading strategies](router-tutorial-toh#custom-preloading) for fine control over which modules to preload and when. This section guides you through updating the `CrisisCenterModule` to load lazily by default and use the `[PreloadAllModules](../api/router/preloadallmodules)` strategy to load all lazy loaded modules. #### Lazy load the crisis center Update the route configuration to lazy load the `CrisisCenterModule`. Take the same steps you used to configure `AdminModule` for lazy loading. 1. Change the `crisis-center` path in the `CrisisCenterRoutingModule` to an empty string. 2. Add a `crisis-center` route to the `AppRoutingModule`. 3. Set the `loadChildren` string to load the `CrisisCenterModule`. 4. Remove all mention of the `CrisisCenterModule` from `app.module.ts`. Here are the updated modules *before enabling preload*: ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; import { Router } from '@angular/router'; import { AppComponent } from './app.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; import { ComposeMessageComponent } from './compose-message/compose-message.component'; import { AppRoutingModule } from './app-routing.module'; import { HeroesModule } from './heroes/heroes.module'; import { AuthModule } from './auth/auth.module'; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, FormsModule, HeroesModule, AuthModule, AppRoutingModule, ], declarations: [ AppComponent, ComposeMessageComponent, PageNotFoundComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes, } from '@angular/router'; import { ComposeMessageComponent } from './compose-message/compose-message.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; import { authGuard } from './auth/auth.guard'; const appRoutes: Routes = [ { path: 'compose', component: ComposeMessageComponent, outlet: 'popup' }, { path: 'admin', loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule), canMatch: [authGuard] }, { path: 'crisis-center', loadChildren: () => import('./crisis-center/crisis-center.module').then(m => m.CrisisCenterModule) }, { path: '', redirectTo: '/heroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [ RouterModule.forRoot( appRoutes, ) ], exports: [ RouterModule ] }) export class AppRoutingModule {} ``` ``` 
import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { CrisisCenterHomeComponent } from './crisis-center-home/crisis-center-home.component'; import { CrisisListComponent } from './crisis-list/crisis-list.component'; import { CrisisCenterComponent } from './crisis-center/crisis-center.component'; import { CrisisDetailComponent } from './crisis-detail/crisis-detail.component'; import { canDeactivateGuard } from '../can-deactivate.guard'; import { crisisDetailResolver } from './crisis-detail-resolver'; const crisisCenterRoutes: Routes = [ { path: '', component: CrisisCenterComponent, children: [ { path: '', component: CrisisListComponent, children: [ { path: ':id', component: CrisisDetailComponent, canDeactivate: [canDeactivateGuard], resolve: { crisis: crisisDetailResolver } }, { path: '', component: CrisisCenterHomeComponent } ] } ] } ]; @NgModule({ imports: [ RouterModule.forChild(crisisCenterRoutes) ], exports: [ RouterModule ] }) export class CrisisCenterRoutingModule { } ``` You could try this now and confirm that the `CrisisCenterModule` loads after you click the "Crisis Center" button. To enable preloading of all lazy loaded modules, import the `[PreloadAllModules](../api/router/preloadallmodules)` token from the Angular router package. The second argument in the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method takes an object for additional configuration options. The `preloadingStrategy` is one of those options. Add the `[PreloadAllModules](../api/router/preloadallmodules)` token to the `forRoot()` call: ``` RouterModule.forRoot( appRoutes, { enableTracing: true, // <-- debugging purposes only preloadingStrategy: PreloadAllModules } ) ``` This configures the `[Router](../api/router/router)` preloader to immediately load all lazy loaded routes (routes with a `loadChildren` property). When you visit `http://localhost:4200`, the `/heroes` route loads immediately upon launch and the router starts loading the `CrisisCenterModule` right after the `HeroesModule` loads. ### Custom Preloading Strategy Preloading every lazy loaded module works well in many situations. However, in consideration of things such as low bandwidth and user metrics, you can use a custom preloading strategy for specific feature modules. This section guides you through adding a custom strategy that only preloads routes whose `data.preload` flag is set to `true`. Recall that you can add anything to the `data` property of a route. Set the `data.preload` flag in the `crisis-center` route in the `AppRoutingModule`. ``` { path: 'crisis-center', loadChildren: () => import('./crisis-center/crisis-center.module').then(m => m.CrisisCenterModule), data: { preload: true } }, ``` Generate a new `SelectivePreloadingStrategy` service. 
``` ng generate service selective-preloading-strategy ``` Replace the contents of `selective-preloading-strategy.service.ts` with the following: ``` import { Injectable } from '@angular/core'; import { PreloadingStrategy, Route } from '@angular/router'; import { Observable, of } from 'rxjs'; @Injectable({ providedIn: 'root', }) export class SelectivePreloadingStrategyService implements PreloadingStrategy { preloadedModules: string[] = []; preload(route: Route, load: () => Observable<any>): Observable<any> { if (route.canMatch === undefined && route.data?.['preload'] && route.path != null) { // add the route path to the preloaded module array this.preloadedModules.push(route.path); // log the route path to the console console.log('Preloaded: ' + route.path); return load(); } else { return of(null); } } } ``` `SelectivePreloadingStrategyService` implements the `[PreloadingStrategy](../api/router/preloadingstrategy)`, which has one method, `preload()`. The router calls the `preload()` method with two arguments: 1. The route to consider. 2. A loader function that can load the routed module asynchronously. An implementation of `preload` must return an `Observable`. If the route does preload, it returns the observable returned by calling the loader function. If the route does not preload, it returns an `Observable` of `null`. In this sample, the `preload()` method loads the route if the route's `data.preload` flag is truthy. We also skip loading the `[Route](../api/router/route)` if there is a `canMatch` guard because the user might not have access to it. As a side effect, `SelectivePreloadingStrategyService` logs the `path` of a selected route in its public `preloadedModules` array. Shortly, you'll extend the `AdminDashboardComponent` to inject this service and display its `preloadedModules` array. But first, make a few changes to the `AppRoutingModule`. 1. Import `SelectivePreloadingStrategyService` into `AppRoutingModule`. 2. Replace the `[PreloadAllModules](../api/router/preloadallmodules)` strategy in the call to `forRoot()` with this `SelectivePreloadingStrategyService`. Now edit the `AdminDashboardComponent` to display the log of preloaded routes. 1. Import the `SelectivePreloadingStrategyService`. 2. Inject it into the dashboard's constructor. 3. Update the template to display the strategy service's `preloadedModules` array. Now the file is as follows: ``` import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { Observable } from 'rxjs'; import { map } from 'rxjs/operators'; import { SelectivePreloadingStrategyService } from '../../selective-preloading-strategy.service'; @Component({ selector: 'app-admin-dashboard', templateUrl: './admin-dashboard.component.html', styleUrls: ['./admin-dashboard.component.css'] }) export class AdminDashboardComponent implements OnInit { sessionId!: Observable<string>; token!: Observable<string>; modules: string[] = []; constructor( private route: ActivatedRoute, preloadStrategy: SelectivePreloadingStrategyService ) { this.modules = preloadStrategy.preloadedModules; } ngOnInit() { // Capture the session ID if available this.sessionId = this.route .queryParamMap .pipe(map(params => params.get('session_id') || 'None')); // Capture the fragment if available this.token = this.route .fragment .pipe(map(fragment => fragment || 'None')); } } ``` Once the application loads the initial route, the `CrisisCenterModule` is preloaded. 
Verify this by logging in to the `Admin` feature area and noting that the `crisis-center` is listed in the `Preloaded Modules`. It also logs to the browser's console. ### Migrating URLs with redirects You've set up the routes for navigating around your application and used navigation imperatively and declaratively. But like any application, requirements change over time. You've set up links and navigation to `/heroes` and `/hero/:id` from the `HeroListComponent` and `HeroDetailComponent` components. If there were a requirement that links to `heroes` become `superheroes`, you would still want the previous URLs to navigate correctly. You also don't want to update every link in your application, so redirects make refactoring routes trivial. #### Changing `/heroes` to `/superheroes` This section guides you through migrating the `Hero` routes to new URLs. The `[Router](../api/router/router)` checks for redirects in your configuration before navigating, so each redirect is triggered when needed. To support this change, add redirects from the old routes to the new routes in the `heroes-routing.module`. ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { HeroListComponent } from './hero-list/hero-list.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; const heroesRoutes: Routes = [ { path: 'heroes', redirectTo: '/superheroes' }, { path: 'hero/:id', redirectTo: '/superhero/:id' }, { path: 'superheroes', component: HeroListComponent, data: { animation: 'heroes' } }, { path: 'superhero/:id', component: HeroDetailComponent, data: { animation: 'hero' } } ]; @NgModule({ imports: [ RouterModule.forChild(heroesRoutes) ], exports: [ RouterModule ] }) export class HeroesRoutingModule { } ``` Notice two different types of redirects. The first change is from `/heroes` to `/superheroes` without any parameters. The second change is from `/hero/:id` to `/superhero/:id`, which includes the `:id` route parameter. Router redirects also use powerful pattern-matching, so the `[Router](../api/router/router)` inspects the URL and replaces route parameters in the `path` with their appropriate destination. Previously, you navigated to a URL such as `/hero/15` with a route parameter `id` of `15`. > The `[Router](../api/router/router)` also supports [query parameters](router-tutorial-toh#query-parameters) and the [fragment](router-tutorial-toh#fragment) when using redirects. > > * When using absolute redirects, the `[Router](../api/router/router)` uses the query parameters and the fragment from the `redirectTo` in the route config > > * When using relative redirects, the `[Router](../api/router/router)` uses the query params and the fragment from the source URL > > Currently, the empty path route redirects to `/heroes`, which redirects to `/superheroes`. This won't work because the `[Router](../api/router/router)` handles redirects once at each level of routing configuration. This prevents chaining of redirects, which can lead to endless redirect loops. Instead, update the empty path route in `app-routing.module.ts` to redirect to `/superheroes`.
``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { ComposeMessageComponent } from './compose-message/compose-message.component'; import { PageNotFoundComponent } from './page-not-found/page-not-found.component'; import { authGuard } from './auth/auth.guard'; import { SelectivePreloadingStrategyService } from './selective-preloading-strategy.service'; const appRoutes: Routes = [ { path: 'compose', component: ComposeMessageComponent, outlet: 'popup' }, { path: 'admin', loadChildren: () => import('./admin/admin.module').then(m => m.AdminModule), canMatch: [authGuard] }, { path: 'crisis-center', loadChildren: () => import('./crisis-center/crisis-center.module').then(m => m.CrisisCenterModule), data: { preload: true } }, { path: '', redirectTo: '/superheroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [ RouterModule.forRoot( appRoutes, { enableTracing: false, // <-- debugging purposes only preloadingStrategy: SelectivePreloadingStrategyService, } ) ], exports: [ RouterModule ] }) export class AppRoutingModule { } ``` A `[routerLink](../api/router/routerlink)` isn't tied to route configuration, so update the associated router links to remain active when the new route is active. Update the `app.component.ts` template for the `/heroes` `[routerLink](../api/router/routerlink)`. ``` <div class="wrapper"> <h1 class="title">Angular Router</h1> <nav> <a routerLink="/crisis-center" routerLinkActive="active" ariaCurrentWhenActive="page">Crisis Center</a> <a routerLink="/superheroes" routerLinkActive="active" ariaCurrentWhenActive="page">Heroes</a> <a routerLink="/admin" routerLinkActive="active" ariaCurrentWhenActive="page">Admin</a> <a routerLink="/login" routerLinkActive="active" ariaCurrentWhenActive="page">Login</a> <a [routerLink]="[{ outlets: { popup: ['compose'] } }]">Contact</a> </nav> <div [@routeAnimation]="getRouteAnimationData()"> <router-outlet></router-outlet> </div> <router-outlet name="popup"></router-outlet> </div> ``` Update the `gotoHeroes()` method in the `hero-detail.component.ts` to navigate back to `/superheroes` with the optional route parameters. ``` gotoHeroes(hero: Hero) { const heroId = hero ? hero.id : null; // Pass along the hero id if available // so that the HeroList component can select that hero. // Include a junk 'foo' property for fun. this.router.navigate(['/superheroes', { id: heroId, foo: 'foo' }]); } ``` With the redirects set up, all previous routes now point to their new destinations and both URLs still function as intended. ### Inspect the router's configuration To determine if your routes are actually evaluated [in the proper order](router-tutorial-toh#routing-module-order), you can inspect the router's configuration. Do this by injecting the router and logging to the console its `config` property. For example, update the `AppModule` as follows and look in the browser console window to see the finished route configuration. ``` export class AppModule { // Diagnostic only: inspect router configuration constructor(router: Router) { // Use a custom replacer to display function names in the route configs const replacer = (key: string, value: any) => (typeof value === 'function') ? value.name : value; console.log('Routes: ', JSON.stringify(router.config, replacer, 2)); } } ``` Final application ----------------- For the completed router application, see the live example for the final source code. Last reviewed on Mon Feb 28 2022
angular Using observables to pass values Using observables to pass values ================================ Observables provide support for passing messages between parts of your application. They are used frequently in Angular and are a technique for event handling, asynchronous programming, and handling multiple values. The observer pattern is a software design pattern in which an object, called the *subject*, maintains a list of its dependents, called *observers*, and notifies them automatically of state changes. This pattern is similar (but not identical) to the [publish/subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) design pattern. Observables are declarative —that is, you define a function for publishing values, but it is not executed until a consumer subscribes to it. The subscribed consumer then receives notifications until the function completes, or until they unsubscribe. An observable can deliver multiple values of any type —literals, messages, or events, depending on the context. The API for receiving values is the same whether the values are delivered synchronously or asynchronously. Because setup and teardown logic are both handled by the observable, your application code only needs to worry about subscribing to consume values, and when done, unsubscribing. Whether the stream was keystrokes, an HTTP response, or an interval timer, the interface for listening to values and stopping listening is the same. Because of these advantages, observables are used extensively within Angular, and for application development as well. Basic usage and terms --------------------- As a publisher, you create an `Observable` instance that defines a *subscriber* function. This is the function that is executed when a consumer calls the `subscribe()` method. The subscriber function defines how to obtain or generate values or messages to be published. To execute the observable you have created and begin receiving notifications, you call its `subscribe()` method, passing an *observer*. This is a JavaScript object that defines the handlers for the notifications you receive. The `subscribe()` call returns a `Subscription` object that has an `unsubscribe()` method, which you call to stop receiving notifications. Here's an example that demonstrates the basic usage model by showing how an observable could be used to provide geolocation updates. ``` // Create an Observable that will start listening to geolocation updates // when a consumer subscribes. const locations = new Observable((observer) => { let watchId: number; // Simple geolocation API check provides values to publish if ('geolocation' in navigator) { watchId = navigator.geolocation.watchPosition((position: GeolocationPosition) => { observer.next(position); }, (error: GeolocationPositionError) => { observer.error(error); }); } else { observer.error('Geolocation not available'); } // When the consumer unsubscribes, clean up data ready for next subscription. return { unsubscribe() { navigator.geolocation.clearWatch(watchId); } }; }); // Call subscribe() to start listening for updates. const locationsSubscription = locations.subscribe({ next(position) { console.log('Current Position: ', position); }, error(msg) { console.log('Error Getting Location: ', msg); } }); // Stop listening for location after 10 seconds setTimeout(() => { locationsSubscription.unsubscribe(); }, 10000); ``` Defining observers ------------------ A handler for receiving observable notifications implements the `Observer` interface. 
It is an object that defines callback methods to handle the three types of notifications that an observable can send: | Notification type | Details | | --- | --- | | `next` | Required. A handler for each delivered value. Called zero or more times after execution starts. | | `error` | Optional. A handler for an error notification. An error halts execution of the observable instance. | | `complete` | Optional. A handler for the execution-complete notification. Delayed values can continue to be delivered to the next handler after execution is complete. | An observer object can define any combination of these handlers. If you don't supply a handler for a notification type, the observer ignores notifications of that type. Subscribing ----------- An `Observable` instance begins publishing values only when someone subscribes to it. You subscribe by calling the `subscribe()` method of the instance, passing an observer object to receive the notifications. > In order to show how subscribing works, we need to create a new observable. There is a constructor that you use to create new instances, but for illustration, we can use some methods from the RxJS library that create simple observables of frequently used types: > > > > | RxJS methods | Details | > | --- | --- | > | `of(...items)` | Returns an `Observable` instance that synchronously delivers the values provided as arguments. | > | `from(iterable)` | Converts its argument to an `Observable` instance. This method is commonly used to convert an array to an observable. | > > Here's an example of creating and subscribing to a simple observable, with an observer that logs the received message to the console: ``` // Create simple observable that emits three values const myObservable = of(1, 2, 3); // Create observer object const myObserver = { next: (x: number) => console.log('Observer got a next value: ' + x), error: (err: Error) => console.error('Observer got an error: ' + err), complete: () => console.log('Observer got a complete notification'), }; // Execute with the observer object myObservable.subscribe(myObserver); // Logs: // Observer got a next value: 1 // Observer got a next value: 2 // Observer got a next value: 3 // Observer got a complete notification ``` Alternatively, the `subscribe()` method can accept callback function definitions in line, for `next`, `error`, and `complete` handlers. For example, the following `subscribe()` call is the same as the one that specifies the predefined observer: ``` myObservable.subscribe( x => console.log('Observer got a next value: ' + x), err => console.error('Observer got an error: ' + err), () => console.log('Observer got a complete notification') ); ``` In either case, a `next` handler is required. The `error` and `complete` handlers are optional. > **NOTE**: A `next()` function could receive, for instance, message strings, or event objects, numeric values, or structures, depending on context. As a general term, we refer to data published by an observable as a *stream*. Any type of value can be represented with an observable, and the values are published as a stream. > > Creating observables -------------------- Use the `Observable` constructor to create an observable stream of any type. The constructor takes as its argument the subscriber function to run when the observable's `subscribe()` method executes. A subscriber function receives an `Observer` object, and can publish values to the observer's `next()` method. 
For example, to create an observable equivalent to the `of(1, 2, 3)` above, you could do something like this: ``` // This function runs when subscribe() is called function sequenceSubscriber(observer: Observer<number>) { // synchronously deliver 1, 2, and 3, then complete observer.next(1); observer.next(2); observer.next(3); observer.complete(); // unsubscribe function doesn't need to do anything in this // because values are delivered synchronously return {unsubscribe() {}}; } // Create a new Observable that will deliver the above sequence const sequence = new Observable(sequenceSubscriber); // execute the Observable and print the result of each notification sequence.subscribe({ next(num) { console.log(num); }, complete() { console.log('Finished sequence'); } }); // Logs: // 1 // 2 // 3 // Finished sequence ``` To take this example a little further, we can create an observable that publishes events. In this example, the subscriber function is defined inline. ``` function fromEvent<T extends keyof HTMLElementEventMap>(target: HTMLElement, eventName: T) { return new Observable<HTMLElementEventMap[T]>((observer) => { const handler = (e: HTMLElementEventMap[T]) => observer.next(e); // Add the event handler to the target target.addEventListener(eventName, handler); return () => { // Detach the event handler from the target target.removeEventListener(eventName, handler); }; }); } ``` Now you can use this function to create an observable that publishes keydown events: ``` const ESC_CODE = 'Escape'; const nameInput = document.getElementById('name') as HTMLInputElement; const subscription = fromEvent(nameInput, 'keydown').subscribe((e: KeyboardEvent) => { if (e.code === ESC_CODE) { nameInput.value = ''; } }); ``` Multicasting ------------ A typical observable creates a new, independent execution for each subscribed observer. When an observer subscribes, the observable wires up an event handler and delivers values to that observer. When a second observer subscribes, the observable then wires up a new event handler and delivers values to that second observer in a separate execution. Sometimes, instead of starting an independent execution for each subscriber, you want each subscription to get the same values —even if values have already started emitting. This might be the case with something like an observable of clicks on the document object. *Multicasting* is the practice of broadcasting to a list of multiple subscribers in a single execution. With a multicasting observable, you don't register multiple listeners on the document, but instead re-use the first listener and send values out to each subscriber. When creating an observable you should determine how you want that observable to be used and whether or not you want to multicast its values. Let's look at an example that counts from 1 to 3, with a one-second delay after each number emitted. ``` function sequenceSubscriber(observer: Observer<number>) { const seq = [1, 2, 3]; let timeoutId: any; // Will run through an array of numbers, emitting one value // per second until it gets to the end of the array. 
function doInSequence(arr: number[], idx: number) { timeoutId = setTimeout(() => { observer.next(arr[idx]); if (idx === arr.length - 1) { observer.complete(); } else { doInSequence(arr, ++idx); } }, 1000); } doInSequence(seq, 0); // Unsubscribe should clear the timeout to stop execution return { unsubscribe() { clearTimeout(timeoutId); } }; } // Create a new Observable that will deliver the above sequence const sequence = new Observable(sequenceSubscriber); sequence.subscribe({ next(num) { console.log(num); }, complete() { console.log('Finished sequence'); } }); // Logs: // (at 1 second): 1 // (at 2 seconds): 2 // (at 3 seconds): 3 // (at 3 seconds): Finished sequence ``` Notice that if you subscribe twice, there will be two separate streams, each emitting values every second. It looks something like this: ``` // Subscribe starts the clock, and will emit after 1 second sequence.subscribe({ next(num) { console.log('1st subscribe: ' + num); }, complete() { console.log('1st sequence finished.'); } }); // After 1/2 second, subscribe again. setTimeout(() => { sequence.subscribe({ next(num) { console.log('2nd subscribe: ' + num); }, complete() { console.log('2nd sequence finished.'); } }); }, 500); // Logs: // (at 1 second): 1st subscribe: 1 // (at 1.5 seconds): 2nd subscribe: 1 // (at 2 seconds): 1st subscribe: 2 // (at 2.5 seconds): 2nd subscribe: 2 // (at 3 seconds): 1st subscribe: 3 // (at 3 seconds): 1st sequence finished // (at 3.5 seconds): 2nd subscribe: 3 // (at 3.5 seconds): 2nd sequence finished ``` Changing the observable to be multicasting could look something like this: ``` function multicastSequenceSubscriber() { const seq = [1, 2, 3]; // Keep track of each observer (one for every active subscription) const observers: Observer<unknown>[] = []; // Still a single timeoutId because there will only ever be one // set of values being generated, multicasted to each subscriber let timeoutId: any; // Return the subscriber function (runs when subscribe() // function is invoked) return (observer: Observer<unknown>) => { observers.push(observer); // When this is the first subscription, start the sequence if (observers.length === 1) { const multicastObserver: Observer<number> = { next(val) { // Iterate through observers and notify all subscriptions observers.forEach(obs => obs.next(val)); }, error() { /* Handle the error... */ }, complete() { // Notify all complete callbacks observers.slice(0).forEach(obs => obs.complete()); } }; doSequence(multicastObserver, seq, 0); } return { unsubscribe() { // Remove from the observers array so it's no longer notified observers.splice(observers.indexOf(observer), 1); // If there's no more listeners, do cleanup if (observers.length === 0) { clearTimeout(timeoutId); } } }; // Run through an array of numbers, emitting one value // per second until it gets to the end of the array. 
function doSequence(sequenceObserver: Observer<number>, arr: number[], idx: number) { timeoutId = setTimeout(() => { console.log('Emitting ' + arr[idx]); sequenceObserver.next(arr[idx]); if (idx === arr.length - 1) { sequenceObserver.complete(); } else { doSequence(sequenceObserver, arr, ++idx); } }, 1000); } }; } // Create a new Observable that will deliver the above sequence const multicastSequence = new Observable(multicastSequenceSubscriber()); // Subscribe starts the clock, and begins to emit after 1 second multicastSequence.subscribe({ next(num) { console.log('1st subscribe: ' + num); }, complete() { console.log('1st sequence finished.'); } }); // After 1 1/2 seconds, subscribe again (should "miss" the first value). setTimeout(() => { multicastSequence.subscribe({ next(num) { console.log('2nd subscribe: ' + num); }, complete() { console.log('2nd sequence finished.'); } }); }, 1500); // Logs: // (at 1 second): Emitting 1 // (at 1 second): 1st subscribe: 1 // (at 2 seconds): Emitting 2 // (at 2 seconds): 1st subscribe: 2 // (at 2 seconds): 2nd subscribe: 2 // (at 3 seconds): Emitting 3 // (at 3 seconds): 1st subscribe: 3 // (at 3 seconds): 2nd subscribe: 3 // (at 3 seconds): 1st sequence finished // (at 3 seconds): 2nd sequence finished ``` > Multicasting observables take a bit more setup, but they can be useful for certain applications. Later we will look at tools that simplify the process of multicasting, allowing you to take any observable and make it multicasting. > > Error handling -------------- Because observables produce values asynchronously, try/catch will not effectively catch errors. Instead, you handle errors by specifying an `error` callback on the observer. Producing an error also causes the observable to clean up subscriptions and stop producing values. An observable can either produce values (calling the `next` callback), or it can complete, calling either the `complete` or `error` callback. ``` myObservable.subscribe({ next(num) { console.log('Next num: ' + num)}, error(err) { console.log('Received an error: ' + err)} }); ``` Error handling (and specifically recovering from an error) is covered in more detail in a later section. Last reviewed on Mon Feb 28 2022 angular Skipping component subtrees Skipping component subtrees =========================== JavaScript, by default, uses mutable data structures that you can reference from multiple different components. Angular runs change detection over your entire component tree to make sure that the most up-to-date state of your data structures is reflected in the DOM. Change detection is sufficiently fast for most applications. However, when an application has an especially large component tree, running change detection across the whole application can cause performance issues. You can address this by configuring change detection to only run on a subset of the component tree. If you are confident that a part of the application is not affected by a state change, you can use [OnPush](../api/core/changedetectionstrategy) to skip change detection in an entire component subtree. Using `OnPush` -------------- OnPush change detection instructs Angular to run change detection for a component subtree **only** when: * The root component of the subtree receives new inputs as the result of a template binding. 
Angular compares the current and past value of the input with `==` * Angular handles an event *(for example using event binding, output binding, or `@[HostListener](../api/core/hostlistener)` )* in the subtree's root component or any of its children whether they are using OnPush change detection or not. You can set the change detection strategy of a component to `OnPush` in the `@[Component](../api/core/component)` decorator: ``` import { ChangeDetectionStrategy, Component } from '@angular/core'; @Component({ changeDetection: ChangeDetectionStrategy.OnPush, }) export class MyComponent {} ``` Common change detection scenarios --------------------------------- This section examines several common change detection scenarios to illustrate Angular's behavior. An event is handled by a component with default change detection ---------------------------------------------------------------- If Angular handles an event within a component without `OnPush` strategy, the framework executes change detection on the entire component tree. Angular will skip descendant component subtrees with roots using `OnPush`, which have not received new inputs. As an example, if we set the change detection strategy of `MainComponent` to `OnPush` and the user interacts with a component outside the subtree with root `MainComponent`, Angular will check `AppComponent`, `HeaderComponent`, `SearchComponent`, and `ButtonComponent`, but will skip the `MainComponent` subtree unless `MainComponent` receives new inputs. An event is handled by a component with OnPush ---------------------------------------------- If Angular handles an event within a component with OnPush strategy, the framework will execute change detection within the entire component tree. Angular will ignore component subtrees with roots using OnPush, which have not received new inputs and are outside the component which handled the event. As an example, if Angular handles an event within `MainComponent`, the framework will run change detection in the entire component tree. Angular will ignore the subtree with root `LoginComponent` because it has `OnPush` and the event happened outside of its scope. An event is handled by a descendant of a component with OnPush -------------------------------------------------------------- If Angular handles an event in a component with OnPush, the framework will execute change detection in the entire component tree, including the component’s ancestors. As an example, suppose Angular handles an event in `LoginComponent`, which uses OnPush. Angular will invoke change detection in the entire component subtree including `MainComponent` (`LoginComponent`’s parent), even though `MainComponent` has `OnPush` as well. Angular checks `MainComponent` as well because `LoginComponent` is part of its view. New inputs to component with OnPush ----------------------------------- Angular will run change detection within a child component with `OnPush` when setting an input property as the result of a template binding. For example, suppose `AppComponent` passes a new input to `MainComponent`, which has `OnPush`. Angular will run change detection in `MainComponent` but will not run change detection in `LoginComponent`, which also has `OnPush`, unless it receives new inputs as well. Edge cases ---------- * **Modifying input properties in TypeScript code**.
When you use an API like `@[ViewChild](../api/core/viewchild)` or `@[ContentChild](../api/core/contentchild)` to get a reference to a component in TypeScript and manually modify an `@[Input](../api/core/input)` property, Angular will not automatically run change detection for OnPush components. If you need Angular to run change detection, you can inject `[ChangeDetectorRef](../api/core/changedetectorref)` in your component and call `changeDetectorRef.markForCheck()` to schedule a change detection run. * **Modifying object references**. If an input receives a mutable object as its value and you modify the object while preserving the reference, Angular will not invoke change detection. That’s the expected behavior because the previous and the current value of the input point to the same reference. Last reviewed on Wed May 04 2022
angular Update search keywords Update search keywords ====================== You can help readers find the topics in the Angular documentation by adding keywords to a topic. Keywords help readers find topics by relating alternate terms and related concepts to a topic. In [angular.io](https://angular.io), readers search for content by using: * External search, such as by using [google.com](https://google.com) * The search box at the top of each page Each of these methods can be made more effective by adding relevant keywords to the topics. To update search keywords in a topic ------------------------------------ Perform these steps in a browser. 1. Navigate to the topic to which you want to add or update search keywords. 2. Decide what search keywords you'd like to add to the topic.Keywords should be words that relate to the topic and are not found in the topic headings. 3. Open the topic's **Edit file** page to [make a minor change](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic). 4. Add or update the `@searchKeywords` tag at the end of the topic with your keywords. The `@searchKeywords` tag takes a set of single-word keywords that are separated by spaces. The tag and the keywords must be enclosed in curly brackets. A sample tag is shown here to add these keywords to a page: *route*, *router*, *routing*, and *navigation*. ``` {@searchKeywords route router routing navigation} ``` 5. [Update or add the `@reviewed` entry](reviewing-content#update-the-last-reviewed-date) at the end of the topic's source code. 6. Propose your changes as described in [make a minor change](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic). Last reviewed on Sun Dec 11 2022 angular Creating libraries Creating libraries ================== This page provides a conceptual overview of how to create and publish new libraries to extend Angular functionality. If you find that you need to solve the same problem in more than one application (or want to share your solution with other developers), you have a candidate for a library. A simple example might be a button that sends users to your company website, that would be included in all applications that your company builds. Getting started --------------- Use the Angular CLI to generate a new library skeleton in a new workspace with the following commands. ``` ng new my-workspace --no-create-application cd my-workspace ng generate library my-lib ``` You should be very careful when choosing the name of your library if you want to publish it later in a public package registry such as npm. See [Publishing your library](creating-libraries#publishing-your-library). Avoid using a name that is prefixed with `ng-`, such as `ng-library`. The `ng-` prefix is a reserved keyword used from the Angular framework and its libraries. The `ngx-` prefix is preferred as a convention used to denote that the library is suitable for use with Angular. It is also an excellent indication to consumers of the registry to differentiate between libraries of different JavaScript frameworks. The `ng generate` command creates the `projects/my-lib` folder in your workspace, which contains a component and a service inside an NgModule. > For more details on how a library project is structured, refer to the [Library project files](file-structure#library-project-files) section of the [Project File Structure guide](file-structure). > > Use the monorepo model to use the same workspace for multiple projects. 
See [Setting up for a multi-project workspace](file-structure#multiple-projects). > > When you generate a new library, the workspace configuration file, `angular.json`, is updated with a project of type `library`. ``` "projects": { … "my-lib": { "root": "projects/my-lib", "sourceRoot": "projects/my-lib/src", "projectType": "library", "prefix": "lib", "architect": { "build": { "builder": "@angular-devkit/build-angular:ng-packagr", … ``` Build, test, and lint the project with CLI commands: ``` ng build my-lib --configuration development ng test my-lib ng lint my-lib ``` Notice that the configured builder for the project is different from the default builder for application projects. This builder, among other things, ensures that the library is always built with the [AOT compiler](aot-compiler). To make library code reusable you must define a public API for it. This "user layer" defines what is available to consumers of your library. A user of your library should be able to access public functionality (such as NgModules, service providers and general utility functions) through a single import path. The public API for your library is maintained in the `public-api.ts` file in your library folder. Anything exported from this file is made public when your library is imported into an application. Use an NgModule to expose services and components. Your library should supply documentation (typically a README file) for installation and maintenance. Refactoring parts of an application into a library -------------------------------------------------- To make your solution reusable, you need to adjust it so that it does not depend on application-specific code. Here are some things to consider in migrating application functionality to a library. * Declarations such as components and pipes should be designed as stateless, meaning they don't rely on or alter external variables. If you do rely on state, you need to evaluate every case and decide whether it is application state or state that the library would manage. * Any observables that the components subscribe to internally should be cleaned up and disposed of during the lifecycle of those components * Components should expose their interactions through inputs for providing context, and outputs for communicating events to other components * Check all internal dependencies. + For custom classes or interfaces used in components or service, check whether they depend on additional classes or interfaces that also need to be migrated + Similarly, if your library code depends on a service, that service needs to be migrated + If your library code or its templates depend on other libraries (such as Angular Material, for instance), you must configure your library with those dependencies * Consider how you provide services to client applications. + Services should declare their own providers, rather than declaring providers in the NgModule or a component. Declaring a provider makes that service *tree-shakable*. This practice lets the compiler leave the service out of the bundle if it never gets injected into the application that imports the library. For more about this, see [Tree-shakable providers](architecture-services#providing-services). 
+ If you register global service providers or share providers across multiple NgModules, use the [`forRoot()` and `forChild()` design patterns](singleton-services) provided by the [RouterModule](../api/router/routermodule) + If your library provides optional services that might not be used by all client applications, support proper tree-shaking for that case by using the [lightweight token design pattern](lightweight-injection-tokens) Integrating with the CLI using code-generation schematics --------------------------------------------------------- A library typically includes *reusable code* that defines components, services, and other Angular artifacts (pipes, directives) that you import into a project. A library is packaged into an npm package for publishing and sharing. This package can also include [schematics](glossary#schematic) that provide instructions for generating or transforming code directly in your project, in the same way that the CLI creates a generic new component with `ng generate component`. A schematic that is packaged with a library can, for example, provide the Angular CLI with the information it needs to generate a component that configures and uses a particular feature, or set of features, defined in that library. One example of this is [Angular Material's navigation schematic](https://material.angular.io/guide/schematics#navigation-schematic) which configures the CDK's [BreakpointObserver](https://material.angular.io/cdk/layout/overview#breakpointobserver) and uses it with Material's [MatSideNav](https://material.angular.io/components/sidenav/overview) and [MatToolbar](https://material.angular.io/components/toolbar/overview) components. Create and include the following kinds of schematics: * Include an installation schematic so that `ng add` can add your library to a project * Include generation schematics in your library so that `ng generate` can scaffold your defined artifacts (components, services, tests) in a project * Include an update schematic so that `ng update` can update your library's dependencies and provide migrations for breaking changes in new releases What you include in your library depends on your task. For example, you could define a schematic to create a dropdown that is pre-populated with canned data to show how to add it to an application. If you want a dropdown that would contain different passed-in values each time, your library could define a schematic to create it with a given configuration. Developers could then use `ng generate` to configure an instance for their own application. Suppose you want to read a configuration file and then generate a form based on that configuration. If that form needs additional customization by the developer who is using your library, it might work best as a schematic. However, if the form will always be the same and not need much customization by developers, then you could create a dynamic component that takes the configuration and generates the form. In general, the more complex the customization, the more useful the schematic approach. For more information, see [Schematics Overview](schematics) and [Schematics for Libraries](schematics-for-libraries). Publishing your library ----------------------- Use the Angular CLI and the npm package manager to build and publish your library as an npm package. Angular CLI uses a tool called [ng-packagr](https://github.com/ng-packagr/ng-packagr/blob/master/README.md) to create packages from your compiled code that can be published to npm. 
See [Building libraries with Ivy](creating-libraries#ivy-libraries) for information on the distribution formats supported by `ng-packagr` and guidance on how to choose the right format for your library. You should always build libraries for distribution using the `production` configuration. This ensures that generated output uses the appropriate optimizations and the correct package format for npm. ``` ng build my-lib cd dist/my-lib npm publish ``` Managing assets in a library ---------------------------- In your Angular library, the distributable can include additional assets like theming files, Sass mixins, or documentation (like a changelog). For more information [copy assets into your library as part of the build](https://github.com/ng-packagr/ng-packagr/blob/master/docs/copy-assets.md) and [embed assets in component styles](https://github.com/ng-packagr/ng-packagr/blob/master/docs/embed-assets-css.md). > When including additional assets like Sass mixins or pre-compiled CSS. You need to add these manually to the conditional ["exports"](angular-package-format/index#exports) in the `package.json` of the primary entrypoint. > > `ng-packagr` will merge handwritten `"exports"` with the auto-generated ones, allowing for library authors to configure additional export subpaths, or custom conditions. > > > ``` > "exports": { > ".": { > "sass": "./_index.scss", > }, > "./theming": { > "sass": "./_theming.scss" > }, > "./prebuilt-themes/indigo-pink.css": { > "style": "./prebuilt-themes/indigo-pink.css" > } > } > ``` > The above is an extract from the [@angular/material](https://unpkg.com/browse/@angular/material/package.json) distributable. > > Peer dependencies ----------------- Angular libraries should list any `@angular/*` dependencies the library depends on as peer dependencies. This ensures that when modules ask for Angular, they all get the exact same module. If a library lists `@angular/core` in `dependencies` instead of `peerDependencies`, it might get a different Angular module instead, which would cause your application to break. Using your own library in applications -------------------------------------- You don't have to publish your library to the npm package manager to use it in the same workspace, but you do have to build it first. To use your own library in an application: * Build the library. You cannot use a library before it is built. ``` ng build my-lib ``` * In your applications, import from the library by name: ``` import { myExport } from 'my-lib'; ``` ### Building and rebuilding your library The build step is important if you haven't published your library as an npm package and then installed the package back into your application from npm. For instance, if you clone your git repository and run `npm install`, your editor shows the `my-lib` imports as missing if you haven't yet built your library. > When you import something from a library in an Angular application, Angular looks for a mapping between the library name and a location on disk. When you install a library package, the mapping is in the `node_modules` folder. When you build your own library, it has to find the mapping in your `tsconfig` paths. > > Generating a library with the Angular CLI automatically adds its path to the `tsconfig` file. The Angular CLI uses the `tsconfig` paths to tell the build system where to find the library. > > For more information, see [Path mapping overview](https://www.typescriptlang.org/docs/handbook/module-resolution.html#path-mapping). 
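To make the mapping concrete, the generated entry might look like the following sketch; the exact target path (library source under `projects/` or built output under `dist/`) varies with the CLI version and your configuration, so treat the value as illustrative.

```
{
  "compilerOptions": {
    "paths": {
      "my-lib": ["dist/my-lib"]
    }
  }
}
```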
> > If you find that changes to your library are not reflected in your application, your application is probably using an old build of the library. You can rebuild your library whenever you make changes to it, but this extra step takes time. *Incremental builds* functionality improves the library-development experience. Every time a file is changed a partial build is performed that emits the amended files. Incremental builds can be run as a background process in your development environment. To take advantage of this feature add the `--watch` flag to the build command: ``` ng build my-lib --watch ``` > The CLI `build` command uses a different builder and invokes a different build tool for libraries than it does for applications. > > * The build system for applications, `@angular-devkit/build-angular`, is based on `webpack`, and is included in all new Angular CLI projects > * The build system for libraries is based on `ng-packagr`. It is only added to your dependencies when you add a library using `ng generate library my-lib`. > > The two build systems support different things, and even where they support the same things, they do those things differently. This means that the TypeScript source can result in different JavaScript code in a built library than it would in a built application. > > For this reason, an application that depends on a library should only use TypeScript path mappings that point to the *built library*. TypeScript path mappings should *not* point to the library source `.ts` files. > > Publishing libraries -------------------- There are two distribution formats to use when publishing a library: | Distribution formats | Details | | --- | --- | | Partial-Ivy (recommended) | Contains portable code that can be consumed by Ivy applications built with any version of Angular from v12 onwards. | | Full-Ivy | Contains private Angular Ivy instructions, which are not guaranteed to work across different versions of Angular. This format requires that the library and application are built with the *exact* same version of Angular. This format is useful for environments where all library and application code is built directly from source. | For publishing to npm use the partial-Ivy format as it is stable between patch versions of Angular. Avoid compiling libraries with full-Ivy code if you are publishing to npm because the generated Ivy instructions are not part of Angular's public API, and so might change between patch versions. Ensuring library version compatibility -------------------------------------- The Angular version used to build an application should always be the same or greater than the Angular versions used to build any of its dependent libraries. For example, if you had a library using Angular version 13, the application that depends on that library should use Angular version 13 or later. Angular does not support using an earlier version for the application. If you intend to publish your library to npm, compile with partial-Ivy code by setting `"compilationMode": "partial"` in `tsconfig.prod.json`. This partial format is stable between different versions of Angular, so is safe to publish to npm. Code with this format is processed during the application build using the same version of the Angular compiler, ensuring that the application and all of its libraries use a single version of Angular. 
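For reference, a minimal production TypeScript configuration that enables partial compilation might look like the following sketch. The file name (`tsconfig.prod.json` here, often `tsconfig.lib.prod.json` in CLI-generated library projects) and the `extends` target are illustrative assumptions; the essential piece is the `"compilationMode": "partial"` setting.

```
{
  "extends": "./tsconfig.lib.json",
  "compilerOptions": {
    "declarationMap": false
  },
  "angularCompilerOptions": {
    "compilationMode": "partial"
  }
}
```

In a typical CLI setup, the library's production build configuration in `angular.json` points at this file, so `ng build my-lib` emits the partial-Ivy output described above.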
Avoid compiling libraries with full-Ivy code if you are publishing to npm because the generated Ivy instructions are not part of Angular's public API, and so might change between patch versions. If you've never published a package in npm before, you must create a user account. Read more in [Publishing npm Packages](https://docs.npmjs.com/getting-started/publishing-npm-packages). Consuming partial-Ivy code outside the Angular CLI -------------------------------------------------- An application installs many Angular libraries from npm into its `node_modules` directory. However, the code in these libraries cannot be bundled directly along with the built application as it is not fully compiled. To finish compilation, use the Angular linker. For applications that don't use the Angular CLI, the linker is available as a [Babel](https://babeljs.io) plugin. The plugin is to be imported from `@angular/compiler-cli/linker/babel`. The Angular linker Babel plugin supports build caching, meaning that libraries only need to be processed by the linker a single time, regardless of other npm operations. Example of integrating the plugin into a custom [Webpack](https://webpack.js.org) build by registering the linker as a [Babel](https://babeljs.io) plugin using [babel-loader](https://webpack.js.org/loaders/babel-loader/#options). ``` import linkerPlugin from '@angular/compiler-cli/linker/babel'; export default { // ... module: { rules: [ { test: /\.m?js$/, use: { loader: 'babel-loader', options: { plugins: [linkerPlugin], compact: false, cacheDirectory: true, } } } ] } // ... } ``` > The Angular CLI integrates the linker plugin automatically, so if consumers of your library are using the CLI, they can install Ivy-native libraries from npm without any additional configuration. > > Last reviewed on Mon Feb 28 2022 angular DevTools Overview DevTools Overview ================= Angular DevTools is a browser extension that provides debugging and profiling capabilities for Angular applications. Angular DevTools supports Angular v12 and later. You can find Angular DevTools in the [Chrome Web Store](https://chrome.google.com/webstore/detail/angular-developer-tools/ienfalfjdbdpebioblfackkekamfmbnh) and in [Firefox Addons](https://addons.mozilla.org/en-GB/firefox/addon/angular-devtools/). After installing Angular DevTools, find the extension under the Angular tab in your browser DevTools. When you open the extension, you'll see two additional tabs: | Tabs | Details | | --- | --- | | [Components](devtools#components) | Lets you explore the components and directives in your application and preview or edit their state. | | [Profiler](devtools#profiler) | Lets you profile your application and understand what the performance bottleneck is during change detection execution. | In the top-right corner of Angular DevTools you'll find which version of Angular is running on the page as well as the latest commit hash for the extension. Bug reports ----------- Report issues and feature requests on [GitHub](https://github.com/angular/angular/issues). To report an issue with the Profiler, export the Profiler recording by clicking the **Save Profile** button, and then attaching that export as a file in the issue. > Make sure that the Profiler recording does not contain any confidential information. > > Debug your application ---------------------- The **Components** tab lets you explore the structure of your application. You can visualize and inspect the component and directive instances and preview or modify their state. 
In the next couple of sections we'll look into how to use this tab effectively to debug your application. ### Explore the application structure In the preceding screenshot, you can see the component tree of an application. The component tree displays a hierarchical relationship of the *components and directives* within your application. When you select a component or a directive instance, Angular DevTools presents additional information about that instance. ### View properties Click the individual components or directives in the component explorer to select them and preview their properties. Angular DevTools displays their properties and metadata on the right-hand side of the component tree. Navigate in the component tree using the mouse or the following keyboard shortcuts: | Keyboard shortcut | Details | | --- | --- | | Up and down arrows | Select the previous and next nodes | | Left and right arrows | Collapse and expand a node | To look up a component or directive by name use the search box above the component tree. To navigate to the next search match, press `Enter`. To navigate to the previous search match, press `Shift + Enter`. ### Navigate to the host node To go to the host element of a particular component or directive, find it in the component explorer and double-click it. Browsers' DevTools opens the Elements tab in Chrome or the Inspector one in Firefox, and selects the associated DOM node. ### Navigate to source For components, Angular DevTools also lets you navigate to the component definition in the source tab. After you select a particular component, click the icon at the top-right of the properties view: ### Update property value Like browsers' DevTools, the properties view lets you edit the value of an input, output, or another property. Right-click on the property value. If edit functionality is available for this value type, you'll see a text input. Type the new value and press `Enter`. ### Access selected component or directive in console As a shortcut in the console, Angular DevTools provides you access to instances of the recently selected components or directives. Type `$ng0` to get a reference to the instance of the currently selected component or directive, and type `$ng1` for the previously selected instance. ### Select a directive or component Similar to browsers' DevTools, you can inspect the page to select a particular component or directive. Click the ***Inspect element*** icon in the top left corner within Angular DevTools and hover over a DOM element on the page. The extension recognizes the associated directives and/or components and lets you select the corresponding element in the Component tree. ![selecting dom node](https://angular.io/generated/images/guide/devtools/inspect-element.png) Profile your application ------------------------ The **Profiler** tab lets you preview the execution of Angular's change detection. The Profiler lets you start profiling or import an existing profile. To start profiling your application, hover over the circle in the top-left corner within the **Profiler** tab and click **Start recording**. During profiling, Angular DevTools captures execution events, such as change detection and lifecycle hook execution. To finish recording, click the circle again to **Stop recording**. You can also import an existing recording. Read more about this feature in the [Import recording](devtools#) section. 
### Understand your application's execution In the following screenshot, find the default view of the Profiler after you complete recording. Near the top of the view you can see a sequence of bars, each one of them symbolizing change detection cycles in your app. The taller a bar is, the longer your application has spent in this cycle. When you select a bar, DevTools renders a bar chart with all the components and directives that it captured during this cycle. Earlier on the change detection timeline, you can find how much time Angular spent in this cycle. Angular DevTools attempts to estimate the frame drop at this point to indicate when the execution of your application might impact the user experience. Angular DevTools also indicates what triggered the change detection (that is, the change detection's source). ### Understand component execution When you click on a bar, you'll find a detailed view about how much time your application spent in the particular directive or component: Figure shows the total time spent by NgforOf directive and which method was called in it. It also shows the parent hierarchy of the directive selected. ### Hierarchical views You can also preview the change detection execution in a flame graph-like view. Each tile in the graph represents an element on the screen at a specific position in the render tree. For example, if during one change detection cycle at a specific position in the component tree you had `ComponentA`, this component was removed and in its place Angular rendered `ComponentB`, you'll see both components at the same tile. Each tile is colored depending on how much time Angular spent there. DevTools determines the intensity of the color by the time spent relative to the tile where we've spent the most time in change detection. When you click on a certain tile, you'll see details about it in the panel on the right. Double-clicking the tile zooms it in so you can preview the nested children. ### Debug OnPush To preview the components in which Angular did change detection, select the **Change detection** checkbox at the top, above the flame graph. This view colors all the tiles in which Angular performed change detection in green, and the rest in gray: ### Import recording Click the **Save Profile** button at the top-left of a recorded profiling session to export it as a JSON file and save it to the disk. Then, import the file in the initial view of the profiler by clicking the **Choose file** input: Last reviewed on Mon Feb 28 2022
angular Observables in Angular Observables in Angular ====================== Angular makes use of observables as an interface to handle a variety of common asynchronous operations. For example: * The HTTP module uses observables to handle AJAX requests and responses * The Router and Forms modules use observables to listen for and respond to user-input events Transmitting data between components ------------------------------------ Angular provides an `[EventEmitter](../api/core/eventemitter)` class that is used when publishing values from a component through the [`@Output()` decorator](inputs-outputs#output). `[EventEmitter](../api/core/eventemitter)` extends [RxJS `Subject`](https://rxjs.dev/api/index/class/Subject), adding an `emit()` method so it can send arbitrary values. When you call `emit()`, it passes the emitted value to the `next()` method of any subscribed observer. A good example of usage can be found in the [EventEmitter](../api/core/eventemitter) documentation. Here is the example component that listens for open and close events: ``` <app-zippy (open)="onOpen($event)" (close)="onClose($event)"></app-zippy> ``` Here is the component definition: ``` @Component({ selector: 'app-zippy', template: ` <div class="zippy"> <button type="button" (click)="toggle()">Toggle</button> <div [hidden]="!visible"> <ng-content></ng-content> </div> </div> `, }) export class ZippyComponent { visible = true; @Output() open = new EventEmitter<any>(); @Output() close = new EventEmitter<any>(); toggle() { this.visible = !this.visible; if (this.visible) { this.open.emit(null); } else { this.close.emit(null); } } } ``` HTTP ---- Angular's `[HttpClient](../api/common/http/httpclient)` returns observables from HTTP method calls. For instance, `http.get('/api')` returns an observable. This provides several advantages over promise-based HTTP APIs: * Observables do not mutate the server response (as can occur through chained `.then()` calls on promises). Instead, you can use a series of operators to transform values as needed. * HTTP requests are cancellable through the `unsubscribe()` method * Requests can be configured to get progress event updates * Failed requests can be retried easily Async pipe ---------- The [AsyncPipe](../api/common/asyncpipe) subscribes to an observable or promise and returns the latest value it has emitted. When a new value is emitted, the pipe marks the component to be checked for changes. The following example binds the `time` observable to the component's view. The observable continuously updates the view with the current time. ``` @Component({ selector: 'async-observable-pipe', template: `<div><code>observable|async</code>: Time: {{ time | async }}</div>` }) export class AsyncObservablePipeComponent { time = new Observable<string>(observer => { setInterval(() => observer.next(new Date().toString()), 1000); }); } ``` Router ------ [`Router.events`](../api/router/router#events) provides events as observables. You can use the `filter()` operator from RxJS to look for events of interest, and subscribe to them in order to make decisions based on the sequence of events in the navigation process. 
Here's an example: ``` import { Router, NavigationStart } from '@angular/router'; import { filter } from 'rxjs/operators'; @Component({ selector: 'app-routable', template: 'Routable1Component template' }) export class Routable1Component implements OnInit { navStart: Observable<NavigationStart>; constructor(router: Router) { // Create a new Observable that publishes only the NavigationStart event this.navStart = router.events.pipe( filter(evt => evt instanceof NavigationStart) ) as Observable<NavigationStart>; } ngOnInit() { this.navStart.subscribe(() => console.log('Navigation Started!')); } } ``` The [ActivatedRoute](../api/router/activatedroute) is an injected router service that makes use of observables to get information about a route path and parameters. For example, `[ActivatedRoute.url](../api/router/activatedroute#url)` contains an observable that reports the route path or paths. Here's an example: ``` import { ActivatedRoute } from '@angular/router'; @Component({ selector: 'app-routable', template: 'Routable2Component template' }) export class Routable2Component implements OnInit { constructor(private activatedRoute: ActivatedRoute) {} ngOnInit() { this.activatedRoute.url .subscribe(url => console.log('The URL changed to: ' + url)); } } ``` Reactive forms -------------- Reactive forms have properties that use observables to monitor form control values. The [`FormControl`](../api/forms/formcontrol) properties `valueChanges` and `statusChanges` contain observables that raise change events. Subscribing to an observable form-control property is a way of triggering application logic within the component class. For example: ``` import { FormGroup } from '@angular/forms'; @Component({ selector: 'my-component', template: 'MyComponent Template' }) export class MyComponent implements OnInit { nameChangeLog: string[] = []; heroForm!: FormGroup; ngOnInit() { this.logNameChange(); } logNameChange() { const nameControl = this.heroForm.get('name'); nameControl?.valueChanges.forEach( (value: string) => this.nameChangeLog.push(value) ); } } ``` Last reviewed on Mon Feb 28 2022 angular Route transition animations Route transition animations =========================== Routing enables users to navigate between different routes in an application. Prerequisites ------------- A basic understanding of the following concepts: * [Introduction to Angular animations](animations) * [Transition and triggers](transition-and-triggers) * [Reusable animations](reusable-animations) Enable routing transition animation ----------------------------------- When a user navigates from one route to another, the Angular router maps the URL path to a relevant component and displays its view. Animating this route transition can greatly enhance the user experience. The Angular router comes with high-level animation functions that let you animate the transitions between views when a route changes. To produce an animation sequence when switching between routes, you need to define nested animation sequences. Start with the top-level component that hosts the view, and nest animations in the components that host the embedded views. To enable routing transition animation, do the following: 1. Import the routing module into the application and create a routing configuration that defines the possible routes. 2. Add a router outlet to tell the Angular router where to place the activated components in the DOM. 3. Define the animation. 
Illustrate a router transition animation by navigating between two routes, *Home* and *About* associated with the `HomeComponent` and `AboutComponent` views respectively. Both of these component views are children of the top-most view, hosted by `AppComponent`. Implement a router transition animation that slides in the new view to the right and slides out the old view when navigating between the two routes. Route configuration ------------------- To begin, configure a set of routes using methods available in the `[RouterModule](../api/router/routermodule)` class. This route configuration tells the router how to navigate. Use the `RouterModule.forRoot` method to define a set of routes. Also, add `[RouterModule](../api/router/routermodule)` to the `imports` array of the main module, `AppModule`. > **NOTE**: Use the `RouterModule.forRoot` method in the root module, `AppModule`, to register top-level application routes and providers. For feature modules, call the `RouterModule.forChild` method instead. > > The following configuration defines the possible routes for the application. ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; import { RouterModule } from '@angular/router'; import { AppComponent } from './app.component'; import { OpenCloseComponent } from './open-close.component'; import { OpenClosePageComponent } from './open-close-page.component'; import { OpenCloseChildComponent } from './open-close.component.4'; import { ToggleAnimationsPageComponent } from './toggle-animations-page.component'; import { StatusSliderComponent } from './status-slider.component'; import { StatusSliderPageComponent } from './status-slider-page.component'; import { HeroListPageComponent } from './hero-list-page.component'; import { HeroListGroupPageComponent } from './hero-list-group-page.component'; import { HeroListGroupsComponent } from './hero-list-groups.component'; import { HeroListEnterLeavePageComponent } from './hero-list-enter-leave-page.component'; import { HeroListEnterLeaveComponent } from './hero-list-enter-leave.component'; import { HeroListAutoCalcPageComponent } from './hero-list-auto-page.component'; import { HeroListAutoComponent } from './hero-list-auto.component'; import { HomeComponent } from './home.component'; import { AboutComponent } from './about.component'; import { InsertRemoveComponent } from './insert-remove.component'; import { QueryingComponent } from './querying.component'; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, RouterModule.forRoot([ { path: '', pathMatch: 'full', redirectTo: '/enter-leave' }, { path: 'open-close', component: OpenClosePageComponent, data: { animation: 'openClosePage' } }, { path: 'status', component: StatusSliderPageComponent, data: { animation: 'statusPage' } }, { path: 'toggle', component: ToggleAnimationsPageComponent, data: { animation: 'togglePage' } }, { path: 'heroes', component: HeroListPageComponent, data: { animation: 'filterPage' } }, { path: 'hero-groups', component: HeroListGroupPageComponent, data: { animation: 'heroGroupPage' } }, { path: 'enter-leave', component: HeroListEnterLeavePageComponent, data: { animation: 'enterLeavePage' } }, { path: 'auto', component: HeroListAutoCalcPageComponent, data: { animation: 'autoPage' } }, { path: 'insert-remove', component: InsertRemoveComponent, data: { animation: 'insertRemovePage' } }, { path: 'querying', component: QueryingComponent, data: { 
animation: 'queryingPage' } }, { path: 'home', component: HomeComponent, data: { animation: 'HomePage' } }, { path: 'about', component: AboutComponent, data: { animation: 'AboutPage' } }, ]) ], ``` The `home` and `about` paths are associated with the `HomeComponent` and `AboutComponent` views. The route configuration tells the Angular router to instantiate the `HomeComponent` and `AboutComponent` views when the navigation matches the corresponding path. The `data` property of each route defines the key animation-specific configuration associated with a route. The `data` property value is passed into `AppComponent` when the route changes. > **NOTE**: The `data` property names that you use can be arbitrary. For example, the name *animation* used in the preceding example is an arbitrary choice. > > Router outlet ------------- After configuring the routes, add a `<[router-outlet](../api/router/routeroutlet)>` inside the root `AppComponent` template. The `<[router-outlet](../api/router/routeroutlet)>` directive tells the Angular router where to render the views when matched with a route. The `[ChildrenOutletContexts](../api/router/childrenoutletcontexts)` holds information about outlets and activated routes. The `data` property of each `[Route](../api/router/route)` can be used to animate routing transitions. ``` <div [@routeAnimations]="getRouteAnimationData()"> <router-outlet></router-outlet> </div> ``` `AppComponent` defines a method that can detect when a view changes. The method assigns an animation state value to the animation trigger (`@routeAnimation`) based on the route configuration `data` property value. Here's an example of an `AppComponent` method that detects when a route change happens. ``` constructor(private contexts: ChildrenOutletContexts) {} getRouteAnimationData() { return this.contexts.getContext('primary')?.route?.snapshot?.data?.['animation']; } ``` The `getRouteAnimationData()` method takes the value of the outlet. It returns a string that represents the state of the animation based on the custom data of the current active route. Use this data to control which transition to run for each route. Animation definition -------------------- Animations can be defined directly inside your components. For this example you are defining the animations in a separate file, which allows re-use of animations. The following code snippet defines a reusable animation named `slideInAnimation`. 
``` export const slideInAnimation = trigger('routeAnimations', [ transition('HomePage <=> AboutPage', [ style({ position: 'relative' }), query(':enter, :leave', [ style({ position: 'absolute', top: 0, left: 0, width: '100%' }) ]), query(':enter', [ style({ left: '-100%' }) ]), query(':leave', animateChild()), group([ query(':leave', [ animate('300ms ease-out', style({ left: '100%' })) ]), query(':enter', [ animate('300ms ease-out', style({ left: '0%' })) ]), ]), ]), transition('* <=> *', [ style({ position: 'relative' }), query(':enter, :leave', [ style({ position: 'absolute', top: 0, left: 0, width: '100%' }) ]), query(':enter', [ style({ left: '-100%' }) ]), query(':leave', animateChild()), group([ query(':leave', [ animate('200ms ease-out', style({ left: '100%', opacity: 0 })) ]), query(':enter', [ animate('300ms ease-out', style({ left: '0%' })) ]), query('@*', animateChild()) ]), ]) ]); ``` The animation definition performs the following tasks: * Defines two transitions (a single `[trigger](../api/animations/trigger)` can define multiple states and transitions) * Adjusts the styles of the host and child views to control their relative positions during the transition * Uses `[query](../api/animations/query)()` to determine which child view is entering and which is leaving the host view A route change activates the animation trigger, and a transition matching the state change is applied. > **NOTE**: The transition states must match the `data` property value defined in the route configuration. > > Make the animation definition available in your application by adding the reusable animation (`slideInAnimation`) to the `animations` metadata of the `AppComponent`. ``` @Component({ selector: 'app-root', templateUrl: 'app.component.html', styleUrls: ['app.component.css'], animations: [ slideInAnimation ] }) ``` ### Style the host and child components During a transition, a new view is inserted directly after the old one and both elements appear on screen at the same time. To prevent this behavior, update the host view to use relative positioning. Then, update the removed and inserted child views to use absolute positioning. Adding these styles to the views animates the containers in place and prevents one view from affecting the position of the other on the page. ``` trigger('routeAnimations', [ transition('HomePage <=> AboutPage', [ style({ position: 'relative' }), query(':enter, :leave', [ style({ position: 'absolute', top: 0, left: 0, width: '100%' }) ]), ``` ### Query the view containers Use the `[query](../api/animations/query)()` method to find and animate elements within the current host component. The `[query](../api/animations/query)(":enter")` statement returns the view that is being inserted, and `[query](../api/animations/query)(":leave")` returns the view that is being removed. Assume that you are routing from the *Home => About*. 
``` query(':enter', [ style({ left: '-100%' }) ]), query(':leave', animateChild()), group([ query(':leave', [ animate('300ms ease-out', style({ left: '100%' })) ]), query(':enter', [ animate('300ms ease-out', style({ left: '0%' })) ]), ]), ]), transition('* <=> *', [ style({ position: 'relative' }), query(':enter, :leave', [ style({ position: 'absolute', top: 0, left: 0, width: '100%' }) ]), query(':enter', [ style({ left: '-100%' }) ]), query(':leave', animateChild()), group([ query(':leave', [ animate('200ms ease-out', style({ left: '100%', opacity: 0 })) ]), query(':enter', [ animate('300ms ease-out', style({ left: '0%' })) ]), query('@*', animateChild()) ]), ]) ``` The animation code does the following after styling the views: 1. `[query](../api/animations/query)(':enter', [style](../api/animations/style)({ left: '-100%' }))` matches the view that is added and hides the newly added view by positioning it to the far left. 2. Calls `[animateChild](../api/animations/animatechild)()` on the view that is leaving, to run its child animations. 3. Uses [`group()`](../api/animations/group) function to make the inner animations run in parallel. 4. Within the [`group()`](../api/animations/group) function: 1. Queries the view that is removed and animates it to slide far to the right. 2. Slides in the new view by animating the view with an easing function and duration. This animation results in the `about` view sliding in from the left. 5. Calls the `[animateChild](../api/animations/animatechild)()` method on the new view to run its child animations after the main animation completes. You now have a basic routable animation that animates routing from one view to another. More on Angular animations -------------------------- You might also be interested in the following: * [Introduction to Angular animations](animations) * [Transition and triggers](transition-and-triggers) * [Complex animation sequences](complex-animation-sequences) * [Reusable animations](reusable-animations) Last reviewed on Tue Oct 11 2022 angular Overview of Angular libraries Overview of Angular libraries ============================= Many applications need to solve the same general problems, such as presenting a unified user interface, presenting data, and allowing data entry. Developers can create general solutions for particular domains that can be adapted for re-use in different applications. Such a solution can be built as Angular *libraries* and these libraries can be published and shared as *npm packages*. An Angular library is an Angular [project](glossary#project) that differs from an application in that it cannot run on its own. A library must be imported and used in an application. Libraries extend Angular's base features. For example, to add [reactive forms](reactive-forms) to an application, add the library package using `ng add @angular/forms`, then import the `[ReactiveFormsModule](../api/forms/reactiveformsmodule)` from the `@angular/forms` library in your application code. Similarly, adding the [service worker](service-worker-intro) library to an Angular application is one of the steps for turning an application into a [Progressive Web App](https://developers.google.com/web/progressive-web-apps) (PWA). [Angular Material](https://material.angular.io) is an example of a large, general-purpose library that provides sophisticated, reusable, and adaptable UI components. Any application developer can use these and other libraries that have been published as npm packages by the Angular team or by third parties. 
See [Using Published Libraries](using-libraries). Creating libraries ------------------ If you have developed features that are suitable for reuse, you can create your own libraries. These libraries can be used locally in your workspace, or you can publish them as [npm packages](npm-packages) to share with other projects or other Angular developers. These packages can be published to the npm registry, a private npm Enterprise registry, or a private package management system that supports npm packages. See [Creating Libraries](creating-libraries). Deciding to package features as a library is an architectural decision. It is comparable to deciding whether a feature is a component or a service, or deciding on the scope of a component. Packaging features as a library forces the artifacts in the library to be decoupled from the application's business logic. This can help to avoid various bad practices or architecture mistakes that can make it difficult to decouple and reuse code in the future. Putting code into a separate library is more complex than simply putting everything in one application. It requires more of an investment in time and thought for managing, maintaining, and updating the library. This complexity can pay off when the library is being used in multiple applications. > **NOTE**: Libraries are intended to be used by Angular applications. To add Angular features to non-Angular web applications, use [Angular custom elements](elements). > > Last reviewed on Mon Feb 28 2022
angular Angular compiler options Angular compiler options ======================== When you use [ahead-of-time compilation (AOT)](aot-compiler), you can control how your application is compiled by specifying *template* compiler options in the [TypeScript configuration file](typescript-configuration). The template options object, `angularCompilerOptions`, is a sibling to the `compilerOptions` object that supplies standard options to the TypeScript compiler. ``` { "compileOnSave": false, "compilerOptions": { "baseUrl": "./", // ... }, "angularCompilerOptions": { "enableI18nLegacyMessageIdFormat": false, "strictInjectionParameters": true, // ... } } ``` Configuration inheritance with extends -------------------------------------- Like the TypeScript compiler, the Angular AOT compiler also supports `extends` in the `angularCompilerOptions` section of the TypeScript configuration file. The `extends` property is at the top level, parallel to `compilerOptions` and `angularCompilerOptions`. A TypeScript configuration can inherit settings from another file using the `extends` property. The configuration options from the base file are loaded first, then overridden by those in the inheriting configuration file. For example: ``` { "extends": "./tsconfig.json", "compilerOptions": { "outDir": "./out-tsc/app", // ... "angularCompilerOptions": { "strictTemplates": true, "preserveWhitespaces": true, // ... } } ``` For more information, see the [TypeScript Handbook](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html). Template options ---------------- The following options are available for configuring the AOT template compiler. ### `allowEmptyCodegenFiles` When `true`, create all possible files even if they are empty. Default is `false`. Used by the Bazel build rules to simplify how Bazel rules track file dependencies. Do not use this option outside of the Bazel rules. ### `annotationsAs` Modifies how Angular-specific annotations are emitted to improve tree-shaking. Non-Angular annotations are not affected. One of `[static](../api/upgrade/static) fields` or `decorators`. The default value is `[static](../api/upgrade/static) fields`. * By default, the compiler replaces decorators with a static field in the class, which allows advanced tree-shakers like [Closure compiler](https://github.com/google/closure-compiler) to remove unused classes * The `decorators` value leaves the decorators in place, which makes compilation faster. TypeScript emits calls to the `__decorate` helper. Use `--emitDecoratorMetadata` for runtime reflection. > **NOTE**: That the resulting code cannot tree-shake properly. > > ### `annotateForClosureCompiler` When `true`, use [Tsickle](https://github.com/angular/tsickle) to annotate the emitted JavaScript with [JSDoc](https://jsdoc.app) comments needed by the [Closure Compiler](https://github.com/google/closure-compiler). Default is `false`. ### `compilationMode` Specifies the compilation mode to use. The following modes are available: | Modes | Details | | --- | --- | | `'full'` | Generates fully AOT-compiled code according to the version of Angular that is currently being used. | | `'partial'` | Generates code in a stable, but intermediate form suitable for a published library. | The default value is `'full'`. ### `disableExpressionLowering` When `true`, the default, transforms code that is or could be used in an annotation, to allow it to be imported from template factory modules. See [metadata rewriting](aot-compiler#metadata-rewriting) for more information. 
When `false`, disables this rewriting, requiring the rewriting to be done manually. ### `disableTypeScriptVersionCheck` When `true`, the compiler does not look at the TypeScript version and does not report an error when an unsupported version of TypeScript is used. Not recommended, as unsupported versions of TypeScript might have undefined behavior. Default is `false`. ### `enableI18nLegacyMessageIdFormat` Instructs the Angular template compiler to create legacy ids for messages that are tagged in templates by the `i18n` attribute. See [Mark text for translations](i18n-common-prepare#mark-text-in-component-template "Mark text in component template - Prepare component for translation | Angular") for more information about marking messages for localization. Set this option to `false` unless your project relies upon translations that were created earlier using legacy IDs. Default is `true`. The pre-Ivy message extraction tooling created a variety of legacy formats for extracted message IDs. These message formats have some issues, such as whitespace handling and reliance upon information inside the original HTML of a template. The new message format is more resilient to whitespace changes, is the same across all translation file formats, and can be created directly from calls to `[$localize](../api/localize/init/%24localize)`. This allows `[$localize](../api/localize/init/%24localize)` messages in application code to use the same ID as identical `i18n` messages in component templates. ### `enableResourceInlining` When `true`, replaces the `templateUrl` and `styleUrls` properties in all `@[Component](../api/core/component)` decorators with inline content in the `template` and `styles` properties. When enabled, the `.js` output of `ngc` does not include any lazy-loaded template or style URLs. For library projects created with the Angular CLI, the development configuration default is `true`. ### `enableLegacyTemplate` When `true`, enables the deprecated `<template>` element in place of `[<ng-template>](../api/core/ng-template)`. Default is `false`. Might be required by some third-party Angular libraries. ### `flatModuleId` The module ID to use for importing a flat module (when `flatModuleOutFile` is `true`). References created by the template compiler use this module name when importing symbols from the flat module. Ignored if `flatModuleOutFile` is `false`. ### `flatModuleOutFile` When `true`, generates a flat module index of the given filename and the corresponding flat module metadata. Use to create flat modules that are packaged similarly to `@angular/core` and `@angular/common`. When this option is used, the `package.json` for the library should refer to the created flat module index instead of the library index file. Produces only one `.metadata.json` file, which contains all the metadata necessary for symbols exported from the library index. In the created `.ngfactory.js` files, the flat module index is used to import symbols. Symbols that include both the public API from the library index as well as shrouded internal symbols. By default the `.ts` file supplied in the `files` field is assumed to be the library index. If more than one `.ts` file is specified, `libraryIndex` is used to select the file to use. If more than one `.ts` file is supplied without a `libraryIndex`, an error is produced. A flat module index `.d.ts` and `.js` is created with the given `flatModuleOutFile` name in the same location as the library index `.d.ts` file. 
For example, if a library uses the `public_api.ts` file as the library index of the module, the `tsconfig.json` `files` field would be `["public_api.ts"]`. The `flatModuleOutFile` option could then be set, for example, to `"index.js"`, which produces `index.d.ts` and `index.metadata.json` files. The `module` field of the library's `package.json` would be `"index.js"` and the `typings` field would be `"index.d.ts"`. ### `fullTemplateTypeCheck` When `true`, the recommended value, enables the [binding expression validation](aot-compiler#binding-expression-validation) phase of the template compiler. This phase uses TypeScript to verify binding expressions. For more information, see [Template type checking](template-typecheck). Default is `false`, but when you use the Angular CLI command `ng new --strict`, it is set to `true` in the new project's configuration. > The `fullTemplateTypeCheck` option has been deprecated in Angular 13 in favor of the `strictTemplates` family of compiler options. > > ### `generateCodeForLibraries` When `true`, creates factory files (`.ngfactory.js` and `.ngstyle.js`) for `.d.ts` files with a corresponding `.metadata.json` file. The default value is `true`. When `false`, factory files are created only for `.ts` files. Do this when using factory summaries. ### `preserveWhitespaces` When `false`, the default, removes blank text nodes from compiled templates, which results in smaller emitted template factory modules. Set to `true` to preserve blank text nodes. ### `skipMetadataEmit` When `true`, does not produce `.metadata.json` files. Default is `false`. The `.metadata.json` files contain information needed by the template compiler from a `.ts` file that is not included in the `.d.ts` file produced by the TypeScript compiler. This information includes, for example, the content of annotations, such as a component's template, which TypeScript emits to the `.js` file but not to the `.d.ts` file. You can set to `true` when using factory summaries, because the factory summaries include a copy of the information that is in the `.metadata.json` file. Set to `true` if you are using TypeScript's `--outFile` option, because the metadata files are not valid for this style of TypeScript output. The Angular community does not recommend using `--outFile` with Angular. Use a bundler, such as [webpack](https://webpack.js.org), instead. ### `skipTemplateCodegen` When `true`, does not emit `.ngfactory.js` and `.ngstyle.js` files. This turns off most of the template compiler and disables the reporting of template diagnostics. Can be used to instruct the template compiler to produce `.metadata.json` files for distribution with an `npm` package. This avoids the production of `.ngfactory.js` and `.ngstyle.js` files that cannot be distributed to `npm`. For library projects created with the Angular CLI, the development configuration default is `true`. ### `strictMetadataEmit` When `true`, reports an error to the `.metadata.json` file if `"skipMetadataEmit"` is `false`. Default is `false`. Use only when `"skipMetadataEmit"` is `false` and `"skipTemplateCodegen"` is `true`. This option is intended to verify the `.metadata.json` files emitted for bundling with an `npm` package. The validation is strict and can emit errors for metadata that would never produce an error when used by the template compiler. You can choose to suppress the error emitted by this option for an exported symbol by including `@dynamic` in the comment documenting the symbol. 
It is valid for `.metadata.json` files to contain errors. The template compiler reports these errors if the metadata is used to determine the contents of an annotation. The metadata collector cannot predict the symbols that are designed for use in an annotation. It preemptively includes error nodes in the metadata for the exported symbols. The template compiler can then use the error nodes to report an error if these symbols are used. If the client of a library intends to use a symbol in an annotation, the template compiler does not normally report this. It gets reported after the client actually uses the symbol. This option allows detection of these errors during the build phase of the library and is used, for example, in producing Angular libraries themselves. For library projects created with the Angular CLI, the development configuration default is `true`. ### `strictInjectionParameters` When `true`, reports an error for a supplied parameter whose injection type cannot be determined. When `false`, constructor parameters of classes marked with `@[Injectable](../api/core/injectable)` whose type cannot be resolved produce a warning. The recommended value is `true`, but the default value is `false`. When you use the Angular CLI command `ng new --strict`, it is set to `true` in the created project's configuration. ### `strictTemplates` When `true`, enables [strict template type checking](template-typecheck#strict-mode). The strictness flags that this option enables allow you to turn on and off specific types of strict template type checking. See [troubleshooting template errors](template-typecheck#troubleshooting-template-errors). When you use the Angular CLI command `ng new --strict`, it is set to `true` in the new project's configuration. ### `trace` When `true`, prints extra information while compiling templates. Default is `false`. Command line options -------------------- Most of the time you interact with the Angular Compiler indirectly using Angular CLI. When debugging certain issues, you might find it useful to invoke the Angular Compiler directly. You can use the `ngc` command provided by the `@angular/compiler-cli` npm package to call the compiler from the command line. The `ngc` command is just a wrapper around TypeScript's `tsc` compiler command and is primarily configured via the `tsconfig.json` configuration options documented in [the previous sections](angular-compiler-options#angular-compiler-options). Besides the configuration file, you can also use [`tsc` command line options](https://www.typescriptlang.org/docs/handbook/compiler-options.html) to configure `ngc`. Last reviewed on Mon Feb 28 2022 angular Built-in directives Built-in directives =================== Directives are classes that add additional behavior to elements in your Angular applications. Use Angular's built-in directives to manage forms, lists, styles, and what users see. > See the live example for a working example containing the code snippets in this guide. > > The different types of Angular directives are as follows: | Directive Types | Details | | --- | --- | | [Components](component-overview) | Used with a template. This type of directive is the most common directive type. | | [Attribute directives](built-in-directives#built-in-attribute-directives) | Change the appearance or behavior of an element, component, or another directive. | | [Structural directives](built-in-directives#built-in-structural-directives) | Change the DOM layout by adding and removing DOM elements. 
| This guide covers built-in [attribute directives](built-in-directives#built-in-attribute-directives) and [structural directives](built-in-directives#built-in-structural-directives). Built-in attribute directives ----------------------------- Attribute directives listen to and modify the behavior of other HTML elements, attributes, properties, and components. Many NgModules such as the [`RouterModule`](router "Routing and Navigation") and the [`FormsModule`](forms "Forms") define their own attribute directives. The most common attribute directives are as follows: | Common directives | Details | | --- | --- | | [`NgClass`](built-in-directives#ngClass) | Adds and removes a set of CSS classes. | | [`NgStyle`](built-in-directives#ngstyle) | Adds and removes a set of HTML styles. | | [`NgModel`](built-in-directives#ngModel) | Adds two-way data binding to an HTML form element. | > Built-in directives use only public APIs. They do not have special access to any private APIs that other directives can't access. > > Adding and removing classes with `[NgClass](../api/common/ngclass)` ------------------------------------------------------------------- Add or remove multiple CSS classes simultaneously with `[ngClass](../api/common/ngclass)`. > To add or remove a *single* class, use [class binding](class-binding) rather than `[NgClass](../api/common/ngclass)`. > > ### Using `[NgClass](../api/common/ngclass)` with an expression On the element you'd like to style, add `[[ngClass](../api/common/ngclass)]` and set it equal to an expression. In this case, `isSpecial` is a boolean set to `true` in `app.component.ts`. Because `isSpecial` is true, `[ngClass](../api/common/ngclass)` applies the class of `special` to the `<div>`. ``` <!-- toggle the "special" class on/off with a property --> <div [ngClass]="isSpecial ? 'special' : ''">This div is special</div> ``` ### Using `[NgClass](../api/common/ngclass)` with a method 1. To use `[NgClass](../api/common/ngclass)` with a method, add the method to the component class. In the following example, `setCurrentClasses()` sets the property `currentClasses` with an object that adds or removes three classes based on the `true` or `false` state of three other component properties. Each key of the object is a CSS class name. If a key is `true`, `[ngClass](../api/common/ngclass)` adds the class. If a key is `false`, `[ngClass](../api/common/ngclass)` removes the class. ``` currentClasses: Record<string, boolean> = {}; /* . . . */ setCurrentClasses() { // CSS classes: added/removed per current state of component properties this.currentClasses = { saveable: this.canSave, modified: !this.isUnchanged, special: this.isSpecial }; } ``` 2. In the template, add the `[ngClass](../api/common/ngclass)` property binding to `currentClasses` to set the element's classes: ``` <div [ngClass]="currentClasses">This div is initially saveable, unchanged, and special.</div> ``` For this use case, Angular applies the classes on initialization and in case of changes. The full example calls `setCurrentClasses()` initially with `ngOnInit()` and when the dependent properties change through a button click. These steps are not necessary to implement `[ngClass](../api/common/ngclass)`. For more information, see the live example `app.component.ts` and `app.component.html`. 
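Putting the preceding snippets together, a minimal component sketch might look like the following; the `app-root` selector and the initial property values are assumptions for illustration, and `ngClass` itself requires that the component's NgModule imports `CommonModule` (or `BrowserModule` in the root module).

```
import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <div [ngClass]="currentClasses">
      This div is initially saveable, unchanged, and special.
    </div>
  `,
})
export class AppComponent implements OnInit {
  canSave = true;
  isUnchanged = true;
  isSpecial = true;

  currentClasses: Record<string, boolean> = {};

  ngOnInit() {
    this.setCurrentClasses();
  }

  setCurrentClasses() {
    // CSS classes: added/removed per current state of component properties
    this.currentClasses = {
      saveable: this.canSave,
      modified: !this.isUnchanged,
      special: this.isSpecial,
    };
  }
}
```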
Setting inline styles with `[NgStyle](../api/common/ngstyle)` ------------------------------------------------------------- Use `[NgStyle](../api/common/ngstyle)` to set multiple inline styles simultaneously, based on the state of the component. 1. To use `[NgStyle](../api/common/ngstyle)`, add a method to the component class. In the following example, `setCurrentStyles()` sets the property `currentStyles` with an object that defines three styles, based on the state of three other component properties. ``` currentStyles: Record<string, string> = {}; /* . . . */ setCurrentStyles() { // CSS styles: set per current state of component properties this.currentStyles = { 'font-style': this.canSave ? 'italic' : 'normal', 'font-weight': !this.isUnchanged ? 'bold' : 'normal', 'font-size': this.isSpecial ? '24px' : '12px' }; } ``` 2. To set the element's styles, add an `[ngStyle](../api/common/ngstyle)` property binding to `currentStyles`. ``` <div [ngStyle]="currentStyles"> This div is initially italic, normal weight, and extra large (24px). </div> ``` For this use case, Angular applies the styles upon initialization and in case of changes. To do this, the full example calls `setCurrentStyles()` initially with `ngOnInit()` and when the dependent properties change through a button click. However, these steps are not necessary to implement `[ngStyle](../api/common/ngstyle)` on its own. See the live example `app.component.ts` and `app.component.html` for this optional implementation. Displaying and updating properties with `[ngModel](../api/forms/ngmodel)` ------------------------------------------------------------------------- Use the `[NgModel](../api/forms/ngmodel)` directive to display a data property and update that property when the user makes changes. 1. Import `[FormsModule](../api/forms/formsmodule)` and add it to the NgModule's `imports` list. ``` import { FormsModule } from '@angular/forms'; // <--- JavaScript import from Angular /* . . . */ @NgModule({ /* . . . */ imports: [ BrowserModule, FormsModule // <--- import into the NgModule ], /* . . . */ }) export class AppModule { } ``` 2. Add an `[([ngModel](../api/forms/ngmodel))]` binding on an HTML `<form>` element and set it equal to the property, here `name`. ``` <label for="example-ngModel">[(ngModel)]:</label> <input [(ngModel)]="currentItem.name" id="example-ngModel"> ``` This `[([ngModel](../api/forms/ngmodel))]` syntax can only set a data-bound property. To customize your configuration, write the expanded form, which separates the property and event binding. Use [property binding](property-binding) to set the property and [event binding](event-binding) to respond to changes. The following example changes the `<input>` value to uppercase: ``` <input [ngModel]="currentItem.name" (ngModelChange)="setUppercaseName($event)" id="example-uppercase"> ``` Here are all variations in action, including the uppercase version: ### `[NgModel](../api/forms/ngmodel)` and value accessors The `[NgModel](../api/forms/ngmodel)` directive works for an element supported by a [ControlValueAccessor](../api/forms/controlvalueaccessor). Angular provides *value accessors* for all of the basic HTML form elements. For more information, see [Forms](forms). To apply `[([ngModel](../api/forms/ngmodel))]` to a non-form built-in element or a third-party custom component, you have to write a value accessor. For more information, see the API documentation on [DefaultValueAccessor](../api/forms/defaultvalueaccessor). 
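To give a sense of what writing a value accessor involves, here is a minimal, hedged sketch of a custom component that can participate in `[(ngModel)]` binding; the `app-rating` selector and the rating behavior are invented for this example.

```
import { Component, forwardRef } from '@angular/core';
import { ControlValueAccessor, NG_VALUE_ACCESSOR } from '@angular/forms';

@Component({
  selector: 'app-rating',
  template: `
    <button type="button" [disabled]="disabled" (click)="setValue(value + 1)">
      Rating: {{ value }}
    </button>
  `,
  providers: [
    // Register this component as a value accessor so NgModel can talk to it.
    { provide: NG_VALUE_ACCESSOR, useExisting: forwardRef(() => RatingComponent), multi: true },
  ],
})
export class RatingComponent implements ControlValueAccessor {
  value = 0;
  disabled = false;

  private onChange: (value: number) => void = () => {};
  private onTouched: () => void = () => {};

  // Called by the forms API to write a new value into the view.
  writeValue(value: number): void {
    this.value = value ?? 0;
  }

  registerOnChange(fn: (value: number) => void): void {
    this.onChange = fn;
  }

  registerOnTouched(fn: () => void): void {
    this.onTouched = fn;
  }

  setDisabledState(isDisabled: boolean): void {
    this.disabled = isDisabled;
  }

  // Called from the template when the user interacts with the control.
  setValue(value: number): void {
    this.value = value;
    this.onChange(value);
    this.onTouched();
  }
}
```

With this in place, a template could use `<app-rating [(ngModel)]="hero.rating"></app-rating>`, assuming `FormsModule` is imported and `hero.rating` exists on the host component.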
> When you write an Angular component, you don't need a value accessor or `[NgModel](../api/forms/ngmodel)` if you name the value and event properties according to Angular's [two-way binding syntax](two-way-binding#how-two-way-binding-works). > > Built-in structural directives ------------------------------ Structural directives are responsible for HTML layout. They shape or reshape the DOM's structure, typically by adding, removing, and manipulating the host elements to which they are attached. This section introduces the most common built-in structural directives: | Common built-in structural directives | Details | | --- | --- | | [`NgIf`](built-in-directives#ngIf) | Conditionally creates or disposes of subviews from the template. | | [`NgFor`](built-in-directives#ngFor) | Repeat a node for each item in a list. | | [`NgSwitch`](built-in-directives#ngSwitch) | A set of directives that switch among alternative views. | For more information, see [Structural Directives](structural-directives). Adding or removing an element with `[NgIf](../api/common/ngif)` --------------------------------------------------------------- Add or remove an element by applying an `[NgIf](../api/common/ngif)` directive to a host element. When `[NgIf](../api/common/ngif)` is `false`, Angular removes an element and its descendants from the DOM. Angular then disposes of their components, which frees up memory and resources. To add or remove an element, bind `*[ngIf](../api/common/ngif)` to a condition expression such as `isActive` in the following example. ``` <app-item-detail *ngIf="isActive" [item]="item"></app-item-detail> ``` When the `isActive` expression returns a truthy value, `[NgIf](../api/common/ngif)` adds the `ItemDetailComponent` to the DOM. When the expression is falsy, `[NgIf](../api/common/ngif)` removes the `ItemDetailComponent` from the DOM and disposes of the component and all of its subcomponents. For more information on `[NgIf](../api/common/ngif)` and `NgIfElse`, see the [NgIf API documentation](../api/common/ngif). ### Guarding against `null` By default, `[NgIf](../api/common/ngif)` prevents display of an element bound to a null value. To use `[NgIf](../api/common/ngif)` to guard a `<div>`, add `*[ngIf](../api/common/ngif)="yourProperty"` to the `<div>`. In the following example, the `currentCustomer` name appears because there is a `currentCustomer`. ``` <div *ngIf="currentCustomer">Hello, {{currentCustomer.name}}</div> ``` However, if the property is `null`, Angular does not display the `<div>`. In this example, Angular does not display the `nullCustomer` because it is `null`. ``` <div *ngIf="nullCustomer">Hello, <span>{{nullCustomer}}</span></div> ``` Listing items with `[NgFor](../api/common/ngfor)` ------------------------------------------------- Use the `[NgFor](../api/common/ngfor)` directive to present a list of items. 1. Define a block of HTML that determines how Angular renders a single item. 2. To list your items, assign the shorthand `let item of items` to `*[ngFor](../api/common/ngfor)`. 
``` <div *ngFor="let item of items">{{item.name}}</div> ``` The string `"let item of items"` instructs Angular to do the following: * Store each item in the `items` array in the local `item` looping variable * Make each item available to the templated HTML for each iteration * Translate `"let item of items"` into an `[<ng-template>](../api/core/ng-template)` around the host element * Repeat the `[<ng-template>](../api/core/ng-template)` for each `item` in the list For more information see the [Structural directive shorthand](structural-directives#shorthand) section of [Structural directives](structural-directives). ### Repeating a component view To repeat a component element, apply `*[ngFor](../api/common/ngfor)` to the selector. In the following example, the selector is `<app-item-detail>`. ``` <app-item-detail *ngFor="let item of items" [item]="item"></app-item-detail> ``` Reference a template input variable, such as `item`, in the following locations: * Within the `[ngFor](../api/common/ngfor)` host element * Within the host element descendants to access the item's properties The following example references `item` first in an interpolation and then passes in a binding to the `item` property of the `<app-item-detail>` component. ``` <div *ngFor="let item of items">{{item.name}}</div> <!-- . . . --> <app-item-detail *ngFor="let item of items" [item]="item"></app-item-detail> ``` For more information about template input variables, see [Structural directive shorthand](structural-directives#shorthand). ### Getting the `index` of `*[ngFor](../api/common/ngfor)` Get the `index` of `*[ngFor](../api/common/ngfor)` in a template input variable and use it in the template. In the `*[ngFor](../api/common/ngfor)`, add a semicolon and `let i=index` to the shorthand. The following example gets the `index` in a variable named `i` and displays it with the item name. ``` <div *ngFor="let item of items; let i=index">{{i + 1}} - {{item.name}}</div> ``` The index property of the `[NgFor](../api/common/ngfor)` directive context returns the zero-based index of the item in each iteration. Angular translates this instruction into an `[<ng-template>](../api/core/ng-template)` around the host element, then uses this template repeatedly to create a new set of elements and bindings for each `item` in the list. For more information about shorthand, see the [Structural Directives](structural-directives#shorthand) guide. Repeating elements when a condition is true ------------------------------------------- To repeat a block of HTML when a particular condition is true, put the `*[ngIf](../api/common/ngif)` on a container element that wraps an `*[ngFor](../api/common/ngfor)` element. For more information see [one structural directive per element](structural-directives#one-per-element). ### Tracking items with `*[ngFor](../api/common/ngfor)` `trackBy` Reduce the number of calls your application makes to the server by tracking changes to an item list. With the `*[ngFor](../api/common/ngfor)` `trackBy` property, Angular can change and re-render only those items that have changed, rather than reloading the entire list of items. 1. Add a method to the component that returns the value `[NgFor](../api/common/ngfor)` should track. In this example, the value to track is the item's `id`. If the browser has already rendered `id`, Angular keeps track of it and doesn't re-query the server for the same `id`. ``` trackByItems(index: number, item: Item): number { return item.id; } ``` 2. 
In the shorthand expression, set `trackBy` to the `trackByItems()` method. ``` <div *ngFor="let item of items; trackBy: trackByItems"> ({{item.id}}) {{item.name}} </div> ``` In the following illustration of the `trackBy` effect, **Reset items** creates new items with the same `item.id`s, and **Change ids** creates new items with new `item.id`s. * With no `trackBy`, both buttons trigger complete DOM element replacement. * With `trackBy`, only changing the `id` triggers element replacement. ![Animation of trackBy](https://angular.io/generated/images/guide/built-in-directives/ngfor-trackby.gif) Hosting a directive without a DOM element ----------------------------------------- The Angular `[<ng-container>](../api/core/ng-container)` is a grouping element that doesn't interfere with styles or layout because Angular doesn't put it in the DOM. Use `[<ng-container>](../api/core/ng-container)` when there's no single element to host the directive. Here's a conditional paragraph using `[<ng-container>](../api/core/ng-container)`. ``` <p> I turned the corner <ng-container *ngIf="hero"> and saw {{hero.name}}. I waved </ng-container> and continued on my way. </p> ``` `[<ng-container>](../api/core/ng-container)` is also useful inside elements, such as `<select>`, that accept only a limited set of children. For example, to use `[([ngModel](../api/forms/ngmodel))]` on a `<select>` while conditionally excluding some of its `<option>` elements: 1. Import the `[ngModel](../api/forms/ngmodel)` directive from `[FormsModule](../api/forms/formsmodule)`. 2. Add `[FormsModule](../api/forms/formsmodule)` to the imports section of the relevant Angular module. 3. To conditionally exclude an `<option>`, wrap the `<option>` in an `[<ng-container>](../api/core/ng-container)`. ``` <div> Pick your favorite hero (<label><input type="checkbox" checked (change)="showSad = !showSad">show sad</label>) </div> <select [(ngModel)]="hero"> <ng-container *ngFor="let h of heroes"> <ng-container *ngIf="showSad || h.emotion !== 'sad'"> <option [ngValue]="h">{{h.name}} ({{h.emotion}})</option> </ng-container> </ng-container> </select> ``` Switching cases with `[NgSwitch](../api/common/ngswitch)` --------------------------------------------------------- Like the JavaScript `switch` statement, `[NgSwitch](../api/common/ngswitch)` displays one element from among several possible elements, based on a switch condition. Angular puts only the selected element into the DOM. `[NgSwitch](../api/common/ngswitch)` is a set of three directives: | `[NgSwitch](../api/common/ngswitch)` directives | Details | | --- | --- | | `[NgSwitch](../api/common/ngswitch)` | An attribute directive that changes the behavior of its companion directives. | | `[NgSwitchCase](../api/common/ngswitchcase)` | Structural directive that adds its element to the DOM when its bound value equals the switch value and removes its element from the DOM when the bound value doesn't equal the switch value. | | `[NgSwitchDefault](../api/common/ngswitchdefault)` | Structural directive that adds its element to the DOM when there is no selected `[NgSwitchCase](../api/common/ngswitchcase)`. | 1. On an element, such as a `<div>`, add `[[ngSwitch](../api/common/ngswitch)]` bound to an expression that returns the switch value, such as `feature`. Though the `feature` value in this example is a string, the switch value can be of any type. 2. Bind to `*[ngSwitchCase](../api/common/ngswitchcase)` and `*[ngSwitchDefault](../api/common/ngswitchdefault)` on the elements for the cases.
``` <div [ngSwitch]="currentItem.feature"> <app-stout-item *ngSwitchCase="'stout'" [item]="currentItem"></app-stout-item> <app-device-item *ngSwitchCase="'slim'" [item]="currentItem"></app-device-item> <app-lost-item *ngSwitchCase="'vintage'" [item]="currentItem"></app-lost-item> <app-best-item *ngSwitchCase="'bright'" [item]="currentItem"></app-best-item> <!-- . . . --> <app-unknown-item *ngSwitchDefault [item]="currentItem"></app-unknown-item> </div> ``` 3. In the parent component, define `currentItem`, to use it in the `[[ngSwitch](../api/common/ngswitch)]` expression. ``` currentItem!: Item; ``` 4. In each child component, add an `item` [input property](inputs-outputs#input "Input property") which is bound to the `currentItem` of the parent component. The following two snippets show the parent component and one of the child components. The other child components are identical to `StoutItemComponent`. ``` export class StoutItemComponent { @Input() item!: Item; } ``` Switch directives also work with built-in HTML elements and web components. For example, you could replace the `<app-best-item>` switch case with a `<div>` as follows. ``` <div *ngSwitchCase="'bright'"> Are you as bright as {{currentItem.name}}?</div> ``` What's next ----------- For information on how to build your own custom directives, see [Attribute Directives](attribute-directives) and [Structural Directives](structural-directives). Last reviewed on Mon Feb 28 2022
angular Angular versioning and releases Angular versioning and releases =============================== We recognize that you need stability from the Angular framework. Stability ensures that reusable components and libraries, tutorials, tools, and learned practices don't become obsolete unexpectedly. Stability is essential for the ecosystem around Angular to thrive. We also share with you the need for Angular to keep evolving. We strive to ensure that the foundation on top of which you are building is continuously improving and enabling you to stay up-to-date with the rest of the web ecosystem and your user needs. This document contains the practices that we follow to provide you with a leading-edge application development platform, balanced with stability. We strive to ensure that future changes are always introduced in a predictable way. We want everyone who depends on Angular to know when and how new features are added, and to be well-prepared when obsolete ones are removed. > The practices described in this document apply to Angular 2.0 and later. If you are currently using AngularJS, see [Upgrading from AngularJS](upgrade "Upgrading from Angular JS"). *AngularJS* is the name for all v1.x versions of Angular. > > Angular versioning ------------------ Angular version numbers indicate the level of changes that are introduced by the release. This use of [semantic versioning](https://semver.org/ "Semantic Versioning Specification") helps you understand the potential impact of updating to a new version. Angular version numbers have three parts: `major.minor.patch`. For example, version 7.2.11 indicates major version 7, minor version 2, and patch level 11. The version number is incremented based on the level of change included in the release. | Level of change | Details | | --- | --- | | Major release | Contains significant new features, some but minimal developer assistance is expected during the update. When updating to a new major release, you might need to run update scripts, refactor code, run additional tests, and learn new APIs. | | Minor release | Contains new smaller features. Minor releases are fully backward-compatible; no developer assistance is expected during update, but you can optionally modify your applications and libraries to begin using new APIs, features, and capabilities that were added in the release. We update peer dependencies in minor versions by expanding the supported versions, but we do not require projects to update these dependencies. | | Patch release | Low risk, bug fix release. No developer assistance is expected during update. | > **NOTE**: As of Angular version 7, the major versions of Angular core and the CLI are aligned. This means that in order to use the CLI as you develop an Angular app, the version of `@angular/core` and the CLI need to be the same. > > ### Supported update paths You can `ng update` to any version of Angular, provided that the following criteria are met: * The version you want to update *to* is supported. * The version you want to update *from* is within one major version of the version you want to upgrade to. For example, you can update from version 11 to version 12, provided that version 12 is still supported. If you want to update across multiple major versions, perform each update one major version at a time. For example, to update from version 10 to version 12: 1. Update from version 10 to version 11. 2. Update from version 11 to version 12. 
See [Keeping Up-to-Date](updating "Updating your projects") for more information about updating your Angular projects to the most recent version. ### Preview releases We let you preview what's coming by providing "Next" and Release Candidates (`rc`) pre-releases for each major and minor release: | Pre-release type | Details | | --- | --- | | Next | The release that is under active development and testing. The next release is indicated by a release tag appended with the `-next` identifier, such as `8.1.0-next.0`. | | Release candidate | A release that is feature complete and in final testing. A release candidate is indicated by a release tag appended with the `-rc` identifier, such as version `8.1.0-rc.0`. | The latest `next` or `rc` pre-release version of the documentation is available at [next.angular.io](https://next.angular.io). Release frequency ----------------- We work toward a regular schedule of releases, so that you can plan and coordinate your updates with the continuing evolution of Angular. > Dates are offered as general guidance and are subject to change. > > In general, expect the following release cycle: * A major release every 6 months * 1-3 minor releases for each major release * A patch release and pre-release (`next` or `rc`) build almost every week This cadence of releases gives eager developers access to new features as soon as they are fully developed and pass through our code review and integration testing processes, while maintaining the stability and reliability of the platform for production users that prefer to receive features after they have been validated by Google and other developers that use the pre-release builds. Support policy and schedule --------------------------- > Dates are offered as general guidance and are subject to change. > > ### Release schedule | Version | Date | | --- | --- | | v15.1 | Week of 2023-01-09 | | v15.2 | Week of 2023-02-20 | | v16.0 | Week of 2023-05-01 | ### Support window All major releases are typically supported for 18 months. | Support stage | Support Timing | Details | | --- | --- | --- | | Active | 6 months | Regularly-scheduled updates and patches are released | | Long-term (LTS) | 12 months | Only [critical fixes and security patches](releases#lts-fixes) are released | ### Actively supported versions The following table provides the status for Angular versions under support. | Version | Status | Released | Active ends | LTS ends | | --- | --- | --- | --- | --- | | ^15.0.0 | Active | 2022-11-18 | 2023-05-18 | 2024-05-18 | | ^14.0.0 | LTS | 2022-06-02 | 2022-11-18 | 2023-11-18 | | ^13.0.0 | LTS | 2021-11-04 | 2022-06-02 | 2023-05-04 | Angular versions v2 to v12 are no longer under support. ### LTS fixes As a general rule, a fix is considered for an LTS version if it resolves one of: * A newly identified security vulnerability, * A regression, since the start of LTS, caused by a 3rd party change, such as a new browser version. Deprecation practices --------------------- Sometimes "breaking changes", such as the removal of support for select APIs and features, are necessary to innovate and stay current with new best practices, changing dependencies, or changes in the (web) platform itself. 
To make these transitions as straightforward as possible, we make these commitments to you: * We work hard to minimize the number of breaking changes and to provide migration tools when possible * We follow the deprecation policy described here, so you have time to update your applications to the latest APIs and best practices To help ensure that you have sufficient time and a clear path to update, this is our deprecation policy: | Deprecation stages | Details | | --- | --- | | Announcement | We announce deprecated APIs and features in the [change log](https://github.com/angular/angular/blob/main/CHANGELOG.md "Angular change log"). Deprecated APIs appear in the [documentation](api?status=deprecated) with ~~strikethrough~~. When we announce a deprecation, we also announce a recommended update path. For convenience, [Deprecations](deprecations) contains a summary of deprecated APIs and features. | | Deprecation period | When an API or a feature is deprecated, it is still present in the next two major releases. After that, deprecated APIs and features are candidates for removal. A deprecation can be announced in any release, but the removal of a deprecated API or feature happens only in a major release. Until a deprecated API or feature is removed, it is maintained according to the LTS support policy, meaning that only critical and security issues are fixed. | | npm dependencies | We only make npm dependency updates that require changes to your applications in a major release. In minor releases, we update peer dependencies by expanding the supported versions, but we do not require projects to update these dependencies until a future major version. This means that during minor Angular releases, npm dependency updates within Angular applications and libraries are optional. | Public API surface ------------------ Angular is a collection of many packages, subprojects, and tools. To prevent accidental use of private APIs, and so that you can clearly understand what is covered by the practices described here, we document what is and is not considered our public API surface. For details, see [Supported Public API Surface of Angular](https://github.com/angular/angular/blob/main/docs/PUBLIC_API.md "Supported Public API Surface of Angular"). Any changes to the public API surface are done using the versioning, support, and deprecation policies previously described. Developer Preview ----------------- Occasionally we introduce new APIs under the label of "Developer Preview". These are APIs that are fully functional and polished, but that we are not ready to stabilize under our normal deprecation policy. This may be because we want to gather feedback from real applications before stabilization, or because the associated documentation or migration tooling is not fully complete. The policies and practices that are described in this document do not apply to APIs marked as Developer Preview. Such APIs can change at any time, even in new patch versions of the framework. Teams should decide for themselves whether the benefits of using Developer Preview APIs are worth the risk of breaking changes outside of our normal use of semantic versioning. Last reviewed on Mon Nov 21 2022 angular Cheat Sheet Cheat Sheet =========== | Bootstrapping | Details | | --- | --- | | ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; ``` | Import `[platformBrowserDynamic](../api/platform-browser-dynamic/platformbrowserdynamic)` from `@angular/platform-browser-dynamic`.
| | ``` platformBrowserDynamic().bootstrapModule(AppModule); ``` | Bootstraps the application, using the root component from the specified `[NgModule](../api/core/ngmodule)`. | | NgModules | Details | | --- | --- | | ``` import { NgModule } from '@angular/core'; ``` | Import `[NgModule](../api/core/ngmodule)` from `@angular/core`. | | ``` @NgModule({   declarations: …,   imports: …,   exports: …,   providers: …,   bootstrap: … }) class MyModule {} ``` | Defines a module that contains components, directives, pipes, and providers. | | ``` declarations: [   MyRedComponent,   MyBlueComponent,   MyDatePipe ] ``` | List of components, directives, and pipes that belong to this module. | | ``` imports: [   BrowserModule,   SomeOtherModule ] ``` | List of modules to import into this module. Everything from the imported modules is available to `declarations` of this module. | | ``` exports: [   MyRedComponent,   MyDatePipe ] ``` | List of components, directives, and pipes visible to modules that import this module. | | ``` providers: [   MyService,   { provide: … } ] ``` | List of dependency injection providers visible both to the contents of this module and to importers of this module. | | ``` bootstrap: [MyAppComponent] ``` | List of components to bootstrap when this module is bootstrapped. | | Template syntax | Details | | --- | --- | | ``` <input [value]="firstName"> ``` | Binds property `value` to the result of expression `firstName`. | | ``` <div [attr.role]="myAriaRole"> ``` | Binds attribute `role` to the result of expression `myAriaRole`. | | ``` <div [class.extra-sparkle]="isDelightful"> ``` | Binds the presence of the CSS class `extra-sparkle` on the element to the truthiness of the expression `isDelightful`. | | ``` <div [style.width.px]="mySize"> ``` | Binds style property `width` to the result of expression `mySize` in pixels. Units are optional. | | ``` <button (click)="readRainbow($event)"> ``` | Calls method `readRainbow` when a click event is triggered on this button element (or its children) and passes in the event object. | | ``` <div title="Hello {{ponyName}}"> ``` | Binds a property to an interpolated string, for example, "Hello Seabiscuit". Equivalent to: ``` <div [title]="'Hello ' + ponyName"> ``` | | ``` <p>   Hello {{ponyName}} </p> ``` | Binds text content to an interpolated string, for example, "Hello Seabiscuit". | | ``` <my-cmp [(title)]="name"> ``` | Sets up two-way data binding. Equivalent to: ``` <my-cmp [title]="name" (titleChange)="name=$event"> ``` | | ``` <video #movieplayer …></video> <button (click)="movieplayer.play()">   Play </button> ``` | Creates a local variable `movieplayer` that provides access to the `video` element instance in data-binding and event-binding expressions in the current template. | | ``` <p *myUnless="myExpression">   … </p> ``` | The asterisk (`*`) character turns the current element into an embedded template. Equivalent to: ``` <ng-template [myUnless]="myExpression">   <p>     …   </p> </ng-template> ``` | | ``` <p>   Card No.: {{cardNumber | myCardNumberFormatter}} </p> ``` | Transforms the current value of expression `cardNumber` using the pipe called `myCardNumberFormatter`. | | ``` <p>   Employer: {{employer?.companyName}} </p> ``` | The safe navigation operator (`?`) means that the `employer` field is optional and if `undefined`, the rest of the expression should be ignored. 
| | ``` <svg:rect x="0"           y="0"           width="100"           height="100"/> ``` | An SVG snippet template needs an `svg:` prefix on its root element to disambiguate the SVG element from an HTML component. | | ``` <svg>   <rect x="0"         y="0"         width="100"         height="100"/> </svg> ``` | An `<svg>` root element is detected as an SVG element automatically, without the prefix. | | Built-in directives | Details | | --- | --- | | ``` import { CommonModule } from '@angular/common'; ``` | Import `[CommonModule](../api/common/commonmodule)` from `@angular/common`. | | ``` <section *ngIf="showSection"> ``` | Removes or recreates a portion of the DOM tree based on the `showSection` expression. | | ``` <li *ngFor="let item of list"> ``` | Turns the `li` element and its contents into a template, and uses that to instantiate a view for each item in list. | | ``` <div [ngSwitch]="conditionExpression">   <ng-template [ngSwitchCase]="case1Exp">     …   </ng-template>   <ng-template ngSwitchCase="case2LiteralString">     …   </ng-template>   <ng-template ngSwitchDefault>     …   </ng-template> </div> ``` | Conditionally swaps the contents of the `div` by selecting one of the embedded templates based on the current value of `conditionExpression`. | | ``` <div [ngClass]="{'active': isActive,                  'disabled': isDisabled}"> ``` | Binds the presence of CSS classes on the element to the truthiness of the associated map values. The right-hand expression should return `{class-name: true/false}` map. | | ``` <div [ngStyle]="{'property': 'value'}"> <div [ngStyle]="dynamicStyles()"> ``` | Allows you to assign styles to an HTML element using CSS. You can use CSS directly, as in the first example, or you can call a method from the component. | | Forms | Details | | --- | --- | | ``` import { FormsModule } from '@angular/forms'; ``` | Import `[FormsModule](../api/forms/formsmodule)` from `@angular/forms`. | | ``` <input [(ngModel)]="userName"> ``` | Provides two-way data-binding, parsing, and validation for form controls. | | Class decorators | Details | | --- | --- | | ``` import { Directive, … } from '@angular/core'; ``` | Import `[Directive](../api/core/directive), &hellip;` from `@angular/core';`. | | ``` @Component({…}) class MyComponent() {} ``` | Declares that a class is a component and provides metadata about the component. | | ``` @Directive({…}) class MyDirective() {} ``` | Declares that a class is a directive and provides metadata about the directive. | | ``` @Pipe({…}) class MyPipe() {} ``` | Declares that a class is a pipe and provides metadata about the pipe. | | ``` @Injectable() class MyService() {} ``` | Declares that a class can be provided and injected by other classes. Without this decorator, the compiler won't generate enough metadata to allow the class to be created properly when it's injected somewhere. | | Directive configuration | Details | | --- | --- | | ``` @Directive({   property1: value1,   … }) ``` | Add `property1` property with `value1` value to Directive. | | ``` selector: '.cool-button:not(a)' ``` | Specifies a CSS selector that identifies this directive within a template. Supported selectors include `element`, `[attribute]`, `.class`, and `:not()`. Does not support parent-child relationship selectors. | | ``` providers: [   MyService,   { provide: … } ] ``` | List of dependency injection providers for this directive and its children. 
| | Component configuration `@[Component](../api/core/component)` extends `@[Directive](../api/core/directive)`, so the `@[Directive](../api/core/directive)` configuration applies to components as well | Details | | --- | --- | | ``` moduleId: module.id ``` | If set, the `templateUrl` and `styleUrl` are resolved relative to the component. | | ``` viewProviders: [MyService, { provide: … }] ``` | List of dependency injection providers scoped to this component's view. | | ``` template: 'Hello {{name}}' templateUrl: 'my-component.html' ``` | Inline template or external template URL of the component's view. | | ``` styles: ['.primary {color: red}'] styleUrls: ['my-component.css'] ``` | List of inline CSS styles or external stylesheet URLs for styling the component's view. | | Class field decorators for directives and components | Details | | --- | --- | | ``` import { Input, … } from '@angular/core'; ``` | Import `[Input](../api/core/input), ...` from `@angular/core`. | | ``` @Input() myProperty; ``` | Declares an input property that you can update using property binding (example: `<my-cmp [myProperty]="someExpression">`). | | ``` @Output() myEvent = new EventEmitter(); ``` | Declares an output property that fires events that you can subscribe to with an event binding (example: `<my-cmp (myEvent)="doSomething()">`). | | ``` @HostBinding('class.valid') isValid; ``` | Binds a host element property (here, the CSS class `valid`) to a directive/component property (`isValid`). | | ``` @HostListener('click', ['$event']) onClick(e) {…} ``` | Subscribes to a host element event (`click`) with a directive/component method (`onClick`), optionally passing an argument (`$event`). | | ``` @ContentChild(myPredicate) myChildComponent; ``` | Binds the first result of the component content query (`myPredicate`) to a property (`myChildComponent`) of the class. | | ``` @ContentChildren(myPredicate) myChildComponents; ``` | Binds the results of the component content query (`myPredicate`) to a property (`myChildComponents`) of the class. | | ``` @ViewChild(myPredicate) myChildComponent; ``` | Binds the first result of the component view query (`myPredicate`) to a property (`myChildComponent`) of the class. Not available for directives. | | ``` @ViewChildren(myPredicate) myChildComponents; ``` | Binds the results of the component view query (`myPredicate`) to a property (`myChildComponents`) of the class. Not available for directives. | | Directive and component change detection and lifecycle hooks (implemented as class methods) | Details | | --- | --- | | ``` constructor(myService: MyService, …) { … } ``` | Called before any other lifecycle hook. Use it to inject dependencies, but avoid any serious work here. | | ``` ngOnChanges(changeRecord) { … } ``` | Called after every change to input properties and before processing content or child views. | | ``` ngOnInit() { … } ``` | Called after the constructor, initializing input properties, and the first call to `ngOnChanges`. | | ``` ngDoCheck() { … } ``` | Called every time that the input properties of a component or a directive are checked. Use it to extend change detection by performing a custom check. | | ``` ngAfterContentInit() { … } ``` | Called after `ngOnInit` when the component's or directive's content has been initialized. | | ``` ngAfterContentChecked() { … } ``` | Called after every check of the component's or directive's content. 
| | ``` ngAfterViewInit() { … } ``` | Called after `ngAfterContentInit` when the component's views and child views / the view that a directive is in has been initialized. | | ``` ngAfterViewChecked() { … } ``` | Called after every check of the component's views and child views / the view that a directive is in. | | ``` ngOnDestroy() { … } ``` | Called once, before the instance is destroyed. | | Dependency injection configuration | Details | | --- | --- | | ``` { provide: MyService, useClass: MyMockService } ``` | Sets or overrides the provider for `MyService` to the `MyMockService` class. | | ``` { provide: MyService, useFactory: myFactory } ``` | Sets or overrides the provider for `MyService` to the `myFactory` factory function. | | ``` { provide: MyValue, useValue: 41 } ``` | Sets or overrides the provider for `MyValue` to the value `41`. | | Routing and navigation | Details | | --- | --- | | ``` import { Routes, RouterModule, … } from '@angular/router'; ``` | Import `[Routes](../api/router/routes), [RouterModule](../api/router/routermodule), ...` from `@angular/router`. | | ``` const routes: Routes = [   { path: '', component: HomeComponent },   { path: 'path/:routeParam', component: MyComponent },   { path: 'staticPath', component: … },   { path: '**', component: … },   { path: 'oldPath', redirectTo: '/staticPath' },   { path: …, component: …, data: { message: 'Custom' } } ]); const routing = RouterModule.forRoot(routes); ``` | Configures routes for the application. Supports static, parameterized, redirect, and wildcard routes. Also supports custom route data and resolve. | | ``` <router-outlet></router-outlet> <router-outlet name="aux"></router-outlet> ``` | Marks the location to load the component of the active route. | | ``` <a routerLink="/path"> <a [routerLink]="[ '/path', routeParam ]"> <a [routerLink]="[ '/path', { matrixParam: 'value' } ]"> <a [routerLink]="[ '/path' ]" [queryParams]="{ page: 1 }"> <a [routerLink]="[ '/path' ]" fragment="anchor"> ``` | Creates a link to a different view based on a route instruction consisting of a route path, required and optional parameters, query parameters, and a fragment. To navigate to a root route, use the `/` prefix; for a child route, use the `./`prefix; for a sibling or parent, use the `../` prefix. | | ``` <a [routerLink]="[ '/path' ]" routerLinkActive="active"> ``` | The provided classes are added to the element when the `[routerLink](../api/router/routerlink)` becomes the current active route. | | ``` <a [routerLink]="[ '/path' ]" routerLinkActive="active" ariaCurrentWhenActive="page"> ``` | The provided classes and `aria-current` attribute are added to the element when the `[routerLink](../api/router/routerlink)` becomes the current active route. | | ``` function canActivateGuard: CanActivateFn =   (     route: ActivatedRouteSnapshot,     state: RouterStateSnapshot   ) => { … } { path: …, canActivate: [canActivateGuard] } ``` | An interface for defining a function that the router should call first to determine if it should activate this component. Should return a `boolean|[UrlTree](../api/router/urltree)` or an Observable/Promise that resolves to a `boolean|[UrlTree](../api/router/urltree)`. 
| | ``` function canDeactivateGuard: CanDeactivateFn<T> =   (     component: T,     route: ActivatedRouteSnapshot,     state: RouterStateSnapshot   ) => { … } { path: …, canDeactivate: [canDeactivateGuard] } ``` | An interface for defining a function that the router should call first to determine if it should deactivate this component after a navigation. Should return a `boolean|[UrlTree](../api/router/urltree)` or an Observable/Promise that resolves to a `boolean|[UrlTree](../api/router/urltree)`. | | ``` function canActivateChildGuard: CanActivateChildFn =   (     route: ActivatedRouteSnapshot,     state: RouterStateSnapshot   ) => { … } { path: …, canActivateChild: [canActivateGuard], children: … } ``` | An interface for defining a function that the router should call first to determine if it should activate the child route. Should return a `boolean|[UrlTree](../api/router/urltree)` or an Observable/Promise that resolves to a `boolean|[UrlTree](../api/router/urltree)`. | | ``` function resolveGuard implements ResolveFn<T> =   (     route: ActivatedRouteSnapshot,     state: RouterStateSnapshot   ) => { … } { path: …, resolve: [resolveGuard] } ``` | An interface for defining a function that the router should call first to resolve route data before rendering the route. Should return a value or an Observable/Promise that resolves to a value. | | ``` function canLoadGuard: CanLoadFn =   (     route: Route   ) => { … } { path: …, canLoad: [canLoadGuard], loadChildren: … } ``` | An interface for defining a function that the router should call first to check if the lazy loaded module should be loaded. Should return a `boolean|[UrlTree](../api/router/urltree)` or an Observable/Promise that resolves to a `boolean|[UrlTree](../api/router/urltree)`. | Last reviewed on Mon Feb 28 2022
angular Guidelines for creating NgModules Guidelines for creating NgModules ================================= This topic provides a conceptual overview of the different categories of [NgModules](glossary#ngmodule "Definition of NgModule") you can create in order to organize your code in a modular structure. These categories are not cast in stone —they are suggestions. You may want to create NgModules for other purposes, or combine the characteristics of some of these categories. NgModules are a great way to organize an application and keep code related to a specific functionality or feature separate from other code. Use NgModules to consolidate [components](glossary#component "Definition of component"), [directives](glossary#directive "Definition of directive"), and [pipes](glossary#pipe "Definition of pipe") into cohesive blocks of functionality. Focus each block on a feature or business domain, a workflow or navigation flow, a common collection of utilities, or one or more [providers](glossary#provider "Definition of provider") for [services](glossary#service "Definition of service"). For more about NgModules, see [Organizing your app with NgModules](ngmodules "Organizing your app with NgModules"). > For the example application used in NgModules-related topics, see the . > > Summary of NgModule categories ------------------------------ All applications start by [bootstrapping a root NgModule](bootstrapping "Launching an app with a root NgModule"). You can organize your other NgModules any way you want. This topic provides some guidelines for the following general categories of NgModules: | Category | Details | | --- | --- | | [Domain](module-types#domain) | Is organized around a feature, business domain, or user experience. | | [Routed](module-types#routed) | Is the top component of the NgModule. Acts as the destination of a [router](glossary#router "Definition of router") navigation route. | | [Routing](module-types#routing) | Provides the routing configuration for another NgModule. | | [Service](module-types#service) | Provides utility services such as data access and messaging. | | [Widget](module-types#widget) | Makes a component, directive, or pipe available to other NgModules. | | [Shared](module-types#shared) | Makes a set of components, directives, and pipes available to other NgModules. | The following table summarizes the key characteristics of each category. | NgModule | Declarations | Providers | Exports | Imported by | | --- | --- | --- | --- | --- | | Domain | Yes | Rare | Top component | Another domain, `AppModule` | | Routed | Yes | Rare | No | None | | Routing | No | Yes (Guards) | RouterModule | Another domain (for routing) | | Service | No | Yes | No | `AppModule` | | Widget | Yes | Rare | Yes | Another domain | | Shared | Yes | No | Yes | Another domain | Domain NgModules ---------------- Use a domain NgModule to deliver a user experience dedicated to a particular feature or application domain, such as editing a customer or placing an order. One example is `ContactModule` in the . A domain NgModule organizes the code related to a certain function, containing all of the components, routing, and templates that make up the function. Your top component in the domain NgModule acts as the feature or domain's root, and is the only component you export. Private supporting subcomponents descend from it. 
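As a rough sketch (the module and component names here are hypothetical, not taken from the example application), a domain NgModule might look like this:

```
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

// Hypothetical components for an "order" domain.
import { OrderComponent } from './order.component';
import { OrderLineItemComponent } from './order-line-item.component';

@NgModule({
  imports: [CommonModule],
  declarations: [OrderComponent, OrderLineItemComponent],
  // Only the domain's top component is exported; the supporting
  // subcomponents stay private to this NgModule.
  exports: [OrderComponent],
})
export class OrderModule {}
```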
Import a domain NgModule exactly once into another NgModule, such as a domain NgModule, or into the root NgModule (`AppModule`) of an application that contains only a few NgModules. Domain NgModules consist mostly of declarations. You rarely include providers. If you do, the lifetime of the provided services should be the same as the lifetime of the NgModule. > For more information about lifecycles, see [Hooking into the component lifecycle](lifecycle-hooks "Hooking into the component lifecycle"). > > Routed NgModules ---------------- Use a routed NgModule for all [lazy-loaded NgModules](lazy-loading-ngmodules "Lazy-loading an NgModule"). Use the top component of the NgModule as the destination of a router navigation route. Routed NgModules don't export anything because their components never appear in the template of an external component. Don't import a lazy-loaded routed NgModule into another NgModule, as this would trigger an eager load, defeating the purpose of lazy loading. Routed NgModules rarely have providers because you load a routed NgModule only when needed (such as for routing). Services listed in the NgModules' `provider` array would not be available because the root injector wouldn't know about the lazy-loaded NgModule. If you include providers, the lifetime of the provided services should be the same as the lifetime of the NgModule. Don't provide app-wide [singleton services](singleton-services) in a routed NgModule or in an NgModule that the routed NgModule imports. > For more information about providers and lazy-loaded routed NgModules, see [Limiting provider scope](providers#limiting-provider-scope-by-lazy-loading-modules "Providing dependencies: Limiting provider scope"). > > Routing NgModules ----------------- Use a routing NgModule to provide the routing configuration for a domain NgModule, thereby separating routing concerns from its companion domain NgModule. One example is `ContactRoutingModule` in the , which provides the routing for its companion domain NgModule `ContactModule`. > For an overview and details about routing, see [In-app navigation: routing to views](router "In-app navigation: routing to views"). > > Use a routing NgModule to do the following tasks: * Define routes * Add router configuration to the NgModule's import * Add guard and resolver service providers to the NgModule's providers The name of the routing NgModule should parallel the name of its companion NgModule, using the suffix `Routing`. For example, `ContactModule` in `contact.module.ts` has a routing NgModule named `ContactRoutingModule` in `contact-routing.module.ts`. Import a routing NgModule only into its companion NgModule. If the companion NgModule is the root `AppModule`, the `AppRoutingModule` adds router configuration to its imports with `RouterModule.forRoot(routes)`. All other routing NgModules are children that import `RouterModule.forChild(routes)`. In your routing NgModule, re-export the `[RouterModule](../api/router/routermodule)` as a convenience so that components of the companion NgModule have access to router directives such as `[RouterLink](../api/router/routerlink)` and `[RouterOutlet](../api/router/routeroutlet)`. Don't use declarations in a routing NgModule. Components, directives, and pipes are the responsibility of the companion domain NgModule, not the routing NgModule. Service NgModules ----------------- Use a service NgModule to provide a utility service such as data access or messaging. 
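For example, a data-access NgModule along the following lines would consist of providers only; the service names are hypothetical.

```
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';

// Hypothetical utility services.
import { OrderDataService } from './order-data.service';
import { MessageBusService } from './message-bus.service';

@NgModule({
  imports: [HttpClientModule],
  // Providers only: no declarations and no exports.
  providers: [OrderDataService, MessageBusService],
})
export class DataAccessModule {}
```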
Ideal service NgModules consist entirely of providers and have no declarations. Angular's `[HttpClientModule](../api/common/http/httpclientmodule)` is a good example of a service NgModule. Use only the root `AppModule` to import service NgModules. Widget NgModules ---------------- Use a widget NgModule to make a component, directive, or pipe available to external NgModules. Import widget NgModules into any NgModules that need the widgets in their templates. Many third-party UI component libraries are provided as widget NgModules. A widget NgModule should consist entirely of declarations, most of them exported. It would rarely have providers. Shared NgModules ---------------- Put commonly used directives, pipes, and components into one NgModule, typically named `SharedModule`, and then import just that NgModule wherever you need it in other parts of your application. You can import the shared NgModule in your domain NgModules, including [lazy-loaded NgModules](lazy-loading-ngmodules "Lazy-loading an NgModule"). One example is `SharedModule` in the , which provides the `AwesomePipe` custom pipe and `HighlightDirective` directive. Shared NgModules should not include providers, nor should any of its imported or re-exported NgModules include providers. To learn how to use shared modules to organize and streamline your code, see [Sharing NgModules in an app](sharing-ngmodules "Sharing NgModules in an app"). Next steps ---------- You may also be interested in the following: * For more about NgModules, see [Organizing your app with NgModules](ngmodules "Organizing your app with NgModules") * To learn more about the root NgModule, see [Launching an app with a root NgModule](bootstrapping "Launching an app with a root NgModule") * To learn about frequently used Angular NgModules and how to import them into your app, see [Frequently-used modules](frequent-ngmodules "Frequently-used modules") * For a complete description of the NgModule metadata properties, see [Using the NgModule metadata](ngmodule-api "Using the NgModule metadata") If you want to manage NgModule loading and the use of dependencies and services, see the following: * To learn about loading NgModules eagerly when the application starts, or lazy-loading NgModules asynchronously by the router, see [Lazy-loading feature modules](lazy-loading-ngmodules) * To understand how to provide a service or other dependency for your app, see [Providing Dependencies for an NgModule](providers "Providing Dependencies for an NgModule") * To learn how to create a singleton service to use in NgModules, see [Making a service a singleton](singleton-services "Making a service a singleton") Last reviewed on Mon Feb 28 2022 angular Template statements Template statements =================== Template statements are methods or properties that you can use in your HTML to respond to user events. With template statements, your application can engage users through actions such as displaying dynamic content or submitting forms. > See the Template syntax for the syntax and code snippets in this guide. > > In the following example, the template statement `deleteHero()` appears in quotes to the right of the equals sign `=` character as in `(event)="statement"`. ``` <button type="button" (click)="deleteHero()">Delete hero</button> ``` When the user clicks the **Delete hero** button, Angular calls the `deleteHero()` method in the component class. Use template statements with elements, components, or directives in response to events. 
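As a minimal sketch of the component side of this example (the `Hero` type and the deletion logic are assumed for illustration), the class that the statement refers to might look like this:

```
import { Component } from '@angular/core';

interface Hero {
  id: number;
  name: string;
}

@Component({
  selector: 'app-hero-detail',
  template: `
    <button type="button" (click)="deleteHero()">Delete hero</button>
  `,
})
export class HeroDetailComponent {
  hero: Hero = { id: 1, name: 'Dr. Nice' };

  // Called by the (click) template statement above.
  deleteHero() {
    console.log(`Deleting ${this.hero.name}`);
  }
}
```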
> Responding to events is an aspect of Angular's [unidirectional data flow](glossary#unidirectional-data-flow). You can change anything in your application during a single event loop. > > Syntax ------ Like [template expressions](interpolation), template statements use a language that looks like JavaScript. However, the parser for template statements differs from the parser for template expressions. In addition, the template statements parser specifically supports both basic assignment (`=`) and chaining expressions with semicolons (`;`). The following JavaScript and template expression syntax is not allowed: * `new` * Increment and decrement operators, `++` and `--` * Operator assignment, such as `+=` and `-=` * The bitwise operators, such as `|` and `&` * The [pipe operator](pipes) Statement context ----------------- Statements have a context —a particular part of the application to which the statement belongs. Statements can refer only to what's in the statement context, which is typically the component instance. For example, `deleteHero()` of `(click)="deleteHero()"` is a method of the component in the following snippet. ``` <button type="button" (click)="deleteHero()">Delete hero</button> ``` The statement context may also refer to properties of the template's own context. In the following example, the component's event handling method, `onSave()` takes the template's own `$event` object as an argument. On the next two lines, the `deleteHero()` method takes a [template input variable](structural-directives#shorthand), `hero`, and `onSubmit()` takes a [template reference variable](template-reference-variables), `#heroForm`. ``` <button type="button" (click)="onSave($event)">Save</button> <button type="button" *ngFor="let hero of heroes" (click)="deleteHero(hero)">{{hero.name}}</button> <form #heroForm (ngSubmit)="onSubmit(heroForm)"> ... </form> ``` In this example, the context of the `$event` object, `hero`, and `#heroForm` is the template. Template context names take precedence over component context names. In the preceding `deleteHero(hero)`, the `hero` is the template input variable, not the component's `hero` property. Statement best practices ------------------------ | Practices | Details | | --- | --- | | Conciseness | Use method calls or basic property assignments to keep template statements minimal. | | Work within the context | The context of a template statement can be the component class instance or the template. Because of this, template statements cannot refer to anything in the global namespace such as `window` or `document`. For example, template statements can't call `console.log()` or `Math.max()`. | Last reviewed on Mon Feb 28 2022 angular Understanding dependency injection Understanding dependency injection ================================== Dependency injection, or DI, is one of the fundamental concepts in Angular. DI is wired into the Angular framework and allows classes with Angular decorators, such as Components, Directives, Pipes, and Injectables, to configure dependencies that they need. Two main roles exist in the DI system: dependency consumer and dependency provider. Angular facilitates the interaction between dependency consumers and dependency providers using an abstraction called [Injector](glossary#injector). When a dependency is requested, the injector checks its registry to see if there is an instance already available there. If not, a new instance is created and stored in the registry. 
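To make the registry idea concrete, here is a deliberately simplified sketch of the look-up-or-create behavior described above. It is an illustration only, not Angular's actual `Injector` implementation, and the `SimpleInjector` name is made up.

```ts
// Illustration only: a toy injector that caches one instance per class token.
type Token<T> = new () => T;

class SimpleInjector {
  private registry = new Map<Token<unknown>, unknown>();

  get<T>(token: Token<T>): T {
    // Reuse an existing instance if one is already in the registry.
    if (!this.registry.has(token)) {
      // Otherwise create it once and store it for later requests.
      this.registry.set(token, new token());
    }
    return this.registry.get(token) as T;
  }
}
```
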
Angular creates an application-wide injector (also known as "root" injector) during the application bootstrap process, as well as any other injectors as needed. In most cases you don't need to manually create injectors, but you should know that there is a layer that connects providers and consumers. This topic covers basic scenarios of how a class can act as a dependency. Angular also allows you to use functions, objects, primitive types such as string or Boolean, or any other types as dependencies. For more information, see [Dependency providers](dependency-injection-providers). Providing dependency -------------------- Imagine there is a class called `HeroService` that needs to act as a dependency in a component. The first step is to add the `@Injectable` decorator to show that the class can be injected. ``` @Injectable() class HeroService {} ``` The next step is to make it available in the DI by providing it. A dependency can be provided in multiple places: * At the Component level, using the `providers` field of the `@[Component](../api/core/component)` decorator. In this case the `HeroService` becomes available to all instances of this component and other components and directives used in the template. For example: ``` @Component({ selector: 'hero-list', template: '...', providers: [HeroService] }) class HeroListComponent {} ``` When you register a provider at the component level, you get a new instance of the service with each new instance of that component. * At the NgModule level, using the `providers` field of the `@[NgModule](../api/core/ngmodule)` decorator. In this scenario, the `HeroService` is available to all components, directives, and pipes declared in this NgModule. For example: ``` @NgModule({ declarations: [HeroListComponent], providers: [HeroService] }) class HeroListModule {} ``` When you register a provider with a specific NgModule, the same instance of a service is available to all components in that NgModule. To understand all edge cases, see [Hierarchical injectors](hierarchical-dependency-injection). * At the application root level, which allows injecting it into other classes in the application. This can be done by adding the `providedIn: 'root'` field to the `@[Injectable](../api/core/injectable)` decorator: ``` @Injectable({ providedIn: 'root' }) class HeroService {} ``` When you provide the service at the root level, Angular creates a single, shared instance of the `HeroService` and injects it into any class that asks for it. Registering the provider in the `@[Injectable](../api/core/injectable)` metadata also allows Angular to optimize an app by removing the service from the compiled application if it isn't used, a process known as tree-shaking. Injecting a dependency ---------------------- The most common way to inject a dependency is to declare it in a class constructor. When Angular creates a new instance of a component, directive, or pipe class, it determines which services or other dependencies that class needs by looking at the constructor parameter types. For example, if the `HeroListComponent` needs the `HeroService`, the constructor can look like this: ``` @Component({ … }) class HeroListComponent { constructor(private service: HeroService) {} } ``` When Angular discovers that a component depends on a service, it first checks if the injector has any existing instances of that service.
If a requested service instance doesn't yet exist, the injector creates one using the registered provider, and adds it to the injector before returning the service to Angular. When all requested services have been resolved and returned, Angular can call the component's constructor with those services as arguments. What's next ----------- * [Creating and injecting services](creating-injectable-service) * [Dependency Injection in Action](dependency-injection-in-action) Last reviewed on Tue Aug 02 2022 angular Introduction to modules Introduction to modules ======================= Angular applications are modular and Angular has its own modularity system called *NgModules*. NgModules are containers for a cohesive block of code dedicated to an application domain, a workflow, or a closely related set of capabilities. They can contain components, service providers, and other code files whose scope is defined by the containing NgModule. They can import functionality that is exported from other NgModules, and export selected functionality for use by other NgModules. Every Angular application has at least one NgModule class, [the *root module*](bootstrapping), which is conventionally named `AppModule` and resides in a file named `app.module.ts`. You launch your application by *bootstrapping* the root NgModule. While a small application might have only one NgModule, most applications have many more *feature modules*. The *root* NgModule for an application is so named because it can include child NgModules in a hierarchy of any depth. NgModule metadata ----------------- An NgModule is defined by a class decorated with `@[NgModule](../api/core/ngmodule)()`. The `@[NgModule](../api/core/ngmodule)()` decorator is a function that takes a single metadata object, whose properties describe the module. The most important properties are as follows. | Properties | Details | | --- | --- | | `declarations` | The [components](architecture-components), *directives*, and *pipes* that belong to this NgModule. | | `exports` | The subset of declarations that should be visible and usable in the *component templates* of other NgModules. | | `imports` | Other modules whose exported classes are needed by component templates declared in *this* NgModule. | | `providers` | Creators of [services](architecture-services) that this NgModule contributes to the global collection of services; they become accessible in all parts of the application. (You can also specify providers at the component level.) | | `bootstrap` | The main application view, called the *root component*, which hosts all other application views. Only the *root NgModule* should set the `bootstrap` property. | Here's a simple root NgModule definition. ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; @NgModule({ imports: [ BrowserModule ], providers: [ Logger ], declarations: [ AppComponent ], exports: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` > `AppComponent` is included in the `exports` list here for illustration; it isn't actually necessary in this example. A root NgModule has no reason to *export* anything because other modules don't need to *import* the root NgModule. > > NgModules and components ------------------------ NgModules provide a *compilation context* for their components. 
A root NgModule always has a root component that is created during bootstrap but any NgModule can include any number of additional components, which can be loaded through the router or created through the template. The components that belong to an NgModule share a compilation context. A component and its template together define a *view*. A component can contain a *view hierarchy*, which allows you to define arbitrarily complex areas of the screen that can be created, modified, and destroyed as a unit. A view hierarchy can mix views defined in components that belong to different NgModules. This is often the case, especially for UI libraries. When you create a component, it's associated directly with a single view, called the *host view*. The host view can be the root of a view hierarchy, which can contain *embedded views*, which are in turn the host views of other components. Those components can be in the same NgModule, or can be imported from other NgModules. Views in the tree can be nested to any depth. > **NOTE**: The hierarchical structure of views is a key factor in the way Angular detects and responds to changes in the DOM and application data. > > NgModules and JavaScript modules -------------------------------- The NgModule system is different from, and unrelated to, the JavaScript (ES2015) module system for managing collections of JavaScript objects. These are *complementary* module systems that you can use together to write your applications. In JavaScript each *file* is a module and all objects defined in the file belong to that module. The module declares some objects to be public by marking them with the `export` key word. Other JavaScript modules use *import statements* to access public objects from other modules. ``` import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; ``` ``` export class AppModule { } ``` > [Learn more about the JavaScript module system on the web](https://exploringjs.com/es6/ch_modules.html). > > Angular libraries ----------------- Angular loads as a collection of JavaScript modules. You can think of them as library modules. Each Angular library name begins with the `@angular` prefix. Install them with the node package manager `npm` and import parts of them with JavaScript `import` statements. For example, import Angular's `[Component](../api/core/component)` decorator from the `@angular/core` library like this. ``` import { Component } from '@angular/core'; ``` You also import NgModules from Angular *libraries* using JavaScript import statements. For example, the following code imports the `[BrowserModule](../api/platform-browser/browsermodule)` NgModule from the `platform-browser` library. ``` import { BrowserModule } from '@angular/platform-browser'; ``` In the example of the simple root module above, the application module needs material from within `[BrowserModule](../api/platform-browser/browsermodule)`. To access that material, add it to the `@[NgModule](../api/core/ngmodule)` metadata `imports` like this. ``` imports: [ BrowserModule ], ``` In this way, you're using the Angular and JavaScript module systems *together*. Although it's easy to confuse the two systems, which share the common vocabulary of "imports" and "exports", you will become familiar with the different contexts in which they are used. > Learn more from the [NgModules](ngmodules) guide. > > Last reviewed on Mon Feb 28 2022
angular Event binding Event binding ============= Event binding lets you listen for and respond to user actions such as keystrokes, mouse movements, clicks, and touches. > See the live example for a working example containing the code snippets in this guide. > > Prerequisites ------------- * [Basics of components](architecture-components) * [Basics of templates](glossary#template) * [Binding syntax](binding-syntax) * [Template statements](template-statements) Binding to events ----------------- To bind to an event, you use the Angular event binding syntax. This syntax consists of a target event name within parentheses to the left of an equal sign, and a quoted template statement to the right. Consider the following example; the target event name is `click` and the template statement is `onSave()`. ``` <button (click)="onSave()">Save</button> ``` The event binding listens for the button's click events and calls the component's `onSave()` method whenever a click occurs. ### Determining an event target To determine an event target, Angular checks if the name of the target event matches an event property of a known directive. Consider the following example, in which Angular checks to see if `myClick` is an event on the custom `ClickDirective`. ``` <h4>myClick is an event on the custom ClickDirective:</h4> <button type="button" (myClick)="clickMessage=$event" clickable>click with myClick</button> {{clickMessage}} ``` If the target event name, `myClick`, fails to match an output property of `ClickDirective`, Angular will instead bind to the `myClick` event on the underlying DOM element. Binding to passive events ------------------------- This is an advanced technique that is not necessary for most applications. You may find this useful if you want to optimize frequently occurring events that are causing performance problems. Angular also supports passive event listeners. For example, use the following steps to make a scroll event passive. 1. Create a file `zone-flags.ts` under the `src` directory. 2. Add the following line into this file. ``` (window as any)['__zone_symbol__PASSIVE_EVENTS'] = ['scroll']; ``` 3. In the `src/polyfills.ts` file, before importing zone.js, import the newly created `zone-flags`. ``` import './zone-flags'; import 'zone.js'; // Included with Angular CLI. ``` After those steps, if you add event listeners for the `scroll` event, the listeners will be `passive`. Binding to keyboard events -------------------------- You can bind to keyboard events using Angular's binding syntax. You can specify the key or code that you would like to bind to keyboard events. The `key` and `code` fields are a native part of the browser keyboard event object. By default, event binding assumes you want to use the `key` field on the keyboard event. You can also use the `code` field. Combinations of keys can be separated by a `.` (period). For example, `keydown.enter` will allow you to bind events to the `enter` key. You can also use modifier keys, such as `shift`, `alt`, `control`, and the `command` keys from Mac. The following example shows how to bind a keyboard event to `keydown.shift.t`. ``` <input (keydown.shift.t)="onKeydown($event)" /> ``` Depending on the operating system, some key combinations might create special characters instead of the key combination that you expect. MacOS, for example, creates special characters when you use the option and shift keys together.
If you bind to `keydown.shift.alt.t`, on macOS, that combination produces a `ˇ` character instead of a `t`, which doesn't match the binding and won't trigger your event handler. To bind to `keydown.shift.alt.t` on macOS, use the `code` keyboard event field to get the correct behavior, such as `keydown.code.shiftleft.altleft.keyt` shown in this example. ``` <input (keydown.code.shiftleft.altleft.keyt)="onKeydown($event)" /> ``` The `code` field is more specific than the `key` field. The `key` field always reports `shift`, whereas the `code` field will specify `leftshift` or `rightshift`. When using the `code` field, you might need to add separate bindings to catch all the behaviors you want. Using the `code` field avoids the need to handle OS specific behaviors such as the `shift + option` behavior on macOS. For more information, visit the full reference for [key](https://developer.mozilla.org/en-US/docs/Web/API/UI_Events/Keyboard_event_key_values) and [code](https://developer.mozilla.org/en-US/docs/Web/API/UI_Events/Keyboard_event_code_values) to help build out your event strings. What's next ----------- * For more information on how event binding works, see [How event binding works](event-binding-concepts). * [Property binding](property-binding) * [Text interpolation](interpolation) * [Two-way binding](two-way-binding) Last reviewed on Tue May 10 2022 angular Component testing scenarios Component testing scenarios =========================== This guide explores common component testing use cases. > If you'd like to experiment with the application that this guide describes, run it in your browser or download and run it locally. > > Component binding ----------------- In the example application, the `BannerComponent` presents static title text in the HTML template. After a few changes, the `BannerComponent` presents a dynamic title by binding to the component's `title` property like this. ``` @Component({ selector: 'app-banner', template: '<h1>{{title}}</h1>', styles: ['h1 { color: green; font-size: 350%}'] }) export class BannerComponent { title = 'Test Tour of Heroes'; } ``` As minimal as this is, you decide to add a test to confirm that component actually displays the right content where you think it should. #### Query for the `<h1>` You'll write a sequence of tests that inspect the value of the `<h1>` element that wraps the *title* property interpolation binding. You update the `beforeEach` to find that element with a standard HTML `querySelector` and assign it to the `h1` variable. ``` let component: BannerComponent; let fixture: ComponentFixture<BannerComponent>; let h1: HTMLElement; beforeEach(() => { TestBed.configureTestingModule({ declarations: [ BannerComponent ], }); fixture = TestBed.createComponent(BannerComponent); component = fixture.componentInstance; // BannerComponent test instance h1 = fixture.nativeElement.querySelector('h1'); }); ``` #### `[createComponent](../api/core/createcomponent)()` does not bind data For your first test you'd like to see that the screen displays the default `title`. Your instinct is to write a test that immediately inspects the `<h1>` like this: ``` it('should display original title', () => { expect(h1.textContent).toContain(component.title); }); ``` *That test fails* with the message: ``` expected '' to contain 'Test Tour of Heroes'. ``` Binding happens when Angular performs **change detection**. 
In production, change detection kicks in automatically when Angular creates a component or the user enters a keystroke or an asynchronous activity (for example, AJAX) completes. The `TestBed.createComponent` does *not* trigger change detection; a fact confirmed in the revised test: ``` it('no title in the DOM after createComponent()', () => { expect(h1.textContent).toEqual(''); }); ``` #### `detectChanges()` You must tell the `[TestBed](../api/core/testing/testbed)` to perform data binding by calling `fixture.detectChanges()`. Only then does the `<h1>` have the expected title. ``` it('should display original title after detectChanges()', () => { fixture.detectChanges(); expect(h1.textContent).toContain(component.title); }); ``` Delayed change detection is intentional and useful. It gives the tester an opportunity to inspect and change the state of the component *before Angular initiates data binding and calls [lifecycle hooks](lifecycle-hooks)*. Here's another test that changes the component's `title` property *before* calling `fixture.detectChanges()`. ``` it('should display a different test title', () => { component.title = 'Test Title'; fixture.detectChanges(); expect(h1.textContent).toContain('Test Title'); }); ``` #### Automatic change detection The `BannerComponent` tests frequently call `detectChanges`. Some testers prefer that the Angular test environment run change detection automatically. That's possible by configuring the `[TestBed](../api/core/testing/testbed)` with the `[ComponentFixtureAutoDetect](../api/core/testing/componentfixtureautodetect)` provider. First import it from the testing utility library: ``` import { ComponentFixtureAutoDetect } from '@angular/core/testing'; ``` Then add it to the `providers` array of the testing module configuration: ``` TestBed.configureTestingModule({ declarations: [ BannerComponent ], providers: [ { provide: ComponentFixtureAutoDetect, useValue: true } ] }); ``` Here are three tests that illustrate how automatic change detection works. ``` it('should display original title', () => { // Hooray! No `fixture.detectChanges()` needed expect(h1.textContent).toContain(comp.title); }); it('should still see original title after comp.title change', () => { const oldTitle = comp.title; comp.title = 'Test Title'; // Displayed title is old because Angular didn't hear the change :( expect(h1.textContent).toContain(oldTitle); }); it('should display updated title after detectChanges', () => { comp.title = 'Test Title'; fixture.detectChanges(); // detect changes explicitly expect(h1.textContent).toContain(comp.title); }); ``` The first test shows the benefit of automatic change detection. The second and third test reveal an important limitation. The Angular testing environment does *not* know that the test changed the component's `title`. The `[ComponentFixtureAutoDetect](../api/core/testing/componentfixtureautodetect)` service responds to *asynchronous activities* such as promise resolution, timers, and DOM events. But a direct, synchronous update of the component property is invisible. The test must call `fixture.detectChanges()` manually to trigger another cycle of change detection. > Rather than wonder when the test fixture will or won't perform change detection, the samples in this guide *always call* `detectChanges()` *explicitly*. There is no harm in calling `detectChanges()` more often than is strictly necessary. > > #### Change an input value with `dispatchEvent()` To simulate user input, find the input element and set its `value` property. 
You will call `fixture.detectChanges()` to trigger Angular's change detection. But there is an essential, intermediate step. Angular doesn't know that you set the input element's `value` property. It won't read that property until you raise the element's `input` event by calling `dispatchEvent()`. *Then* you call `detectChanges()`. The following example demonstrates the proper sequence. ``` it('should convert hero name to Title Case', () => { // get the name's input and display elements from the DOM const hostElement: HTMLElement = fixture.nativeElement; const nameInput: HTMLInputElement = hostElement.querySelector('input')!; const nameDisplay: HTMLElement = hostElement.querySelector('span')!; // simulate user entering a new name into the input box nameInput.value = 'quick BROWN fOx'; // Dispatch a DOM event so that Angular learns of input value change. nameInput.dispatchEvent(new Event('input')); // Tell Angular to update the display binding through the title pipe fixture.detectChanges(); expect(nameDisplay.textContent).toBe('Quick Brown Fox'); }); ``` Component with external files ----------------------------- The preceding `BannerComponent` is defined with an *inline template* and *inline css*, specified in the `@[Component.template](../api/core/component#template)` and `@[Component.styles](../api/core/component#styles)` properties respectively. Many components specify *external templates* and *external css* with the `@[Component.templateUrl](../api/core/component#templateUrl)` and `@[Component.styleUrls](../api/core/component#styleUrls)` properties respectively, as the following variant of `BannerComponent` does. ``` @Component({ selector: 'app-banner', templateUrl: './banner-external.component.html', styleUrls: ['./banner-external.component.css'] }) ``` This syntax tells the Angular compiler to read the external files during component compilation. That's not a problem when you run the CLI `ng test` command because it *compiles the application before running the tests*. However, if you run the tests in a **non-CLI environment**, tests of this component might fail. For example, if you run the `BannerComponent` tests in a web coding environment such as [plunker](https://plnkr.co), you'll see a message like this one: ``` Error: This test module uses the component BannerComponent which is using a "templateUrl" or "styleUrls", but they were never compiled. Please call "TestBed.compileComponents" before your test. ``` You get this test failure message when the runtime environment compiles the source code *during the tests themselves*. To correct the problem, call `compileComponents()` as explained in the following [Calling compileComponents](testing-components-scenarios#compile-components) section. Component with a dependency --------------------------- Components often have service dependencies. The `WelcomeComponent` displays a welcome message to the logged-in user. It knows who the user is based on a property of the injected `UserService`: ``` import { Component, OnInit } from '@angular/core'; import { UserService } from '../model/user.service'; @Component({ selector: 'app-welcome', template: '<h3 class="welcome"><i>{{welcome}}</i></h3>' }) export class WelcomeComponent implements OnInit { welcome = ''; constructor(private userService: UserService) { } ngOnInit(): void { this.welcome = this.userService.isLoggedIn ? 
'Welcome, ' + this.userService.user.name : 'Please log in.'; } } ``` The `WelcomeComponent` has decision logic that interacts with the service, logic that makes this component worth testing. Here's the testing module configuration for the spec file: ``` TestBed.configureTestingModule({ declarations: [ WelcomeComponent ], // providers: [ UserService ], // NO! Don't provide the real service! // Provide a test-double instead providers: [ { provide: UserService, useValue: userServiceStub } ], }); ``` This time, in addition to declaring the *component-under-test*, the configuration adds a `UserService` provider to the `providers` list. But not the real `UserService`. #### Provide service test doubles A *component-under-test* doesn't have to be injected with real services. In fact, it is usually better if they are test doubles such as, stubs, fakes, spies, or mocks. The purpose of the spec is to test the component, not the service, and real services can be trouble. Injecting the real `UserService` could be a nightmare. The real service might ask the user for login credentials and attempt to reach an authentication server. These behaviors can be hard to intercept. It is far easier and safer to create and register a test double in place of the real `UserService`. This particular test suite supplies a minimal mock of the `UserService` that satisfies the needs of the `WelcomeComponent` and its tests: ``` let userServiceStub: Partial<UserService>; userServiceStub = { isLoggedIn: true, user: { name: 'Test User' }, }; ``` #### Get injected services The tests need access to the stub `UserService` injected into the `WelcomeComponent`. Angular has a hierarchical injection system. There can be injectors at multiple levels, from the root injector created by the `[TestBed](../api/core/testing/testbed)` down through the component tree. The safest way to get the injected service, the way that ***always works***, is to **get it from the injector of the *component-under-test***. The component injector is a property of the fixture's `[DebugElement](../api/core/debugelement)`. ``` // UserService actually injected into the component userService = fixture.debugElement.injector.get(UserService); ``` #### `TestBed.inject()` You *might* also be able to get the service from the root injector using `TestBed.inject()`. This is easier to remember and less verbose. But it only works when Angular injects the component with the service instance in the test's root injector. In this test suite, the *only* provider of `UserService` is the root testing module, so it is safe to call `TestBed.inject()` as follows: ``` // UserService from the root injector userService = TestBed.inject(UserService); ``` > For a use case in which `TestBed.inject()` does not work, see the [*Override component providers*](testing-components-scenarios#component-override) section that explains when and why you must get the service from the component's injector instead. 
> > #### Final setup and tests Here's the complete `beforeEach()`, using `TestBed.inject()`: ``` let userServiceStub: Partial<UserService>; beforeEach(() => { // stub UserService for test purposes userServiceStub = { isLoggedIn: true, user: { name: 'Test User' }, }; TestBed.configureTestingModule({ declarations: [ WelcomeComponent ], providers: [ { provide: UserService, useValue: userServiceStub } ], }); fixture = TestBed.createComponent(WelcomeComponent); comp = fixture.componentInstance; // UserService from the root injector userService = TestBed.inject(UserService); // get the "welcome" element by CSS selector (e.g., by class name) el = fixture.nativeElement.querySelector('.welcome'); }); ``` And here are some tests: ``` it('should welcome the user', () => { fixture.detectChanges(); const content = el.textContent; expect(content) .withContext('"Welcome ..."') .toContain('Welcome'); expect(content) .withContext('expected name') .toContain('Test User'); }); it('should welcome "Bubba"', () => { userService.user.name = 'Bubba'; // welcome message hasn't been shown yet fixture.detectChanges(); expect(el.textContent).toContain('Bubba'); }); it('should request login if not logged in', () => { userService.isLoggedIn = false; // welcome message hasn't been shown yet fixture.detectChanges(); const content = el.textContent; expect(content) .withContext('not welcomed') .not.toContain('Welcome'); expect(content) .withContext('"log in"') .toMatch(/log in/i); }); ``` The first is a sanity test; it confirms that the stubbed `UserService` is called and working. > The second parameter to the Jasmine matcher (for example, `'expected name'`) is an optional failure label. If the expectation fails, Jasmine appends this label to the expectation failure message. In a spec with multiple expectations, it can help clarify what went wrong and which expectation failed. > > The remaining tests confirm the logic of the component when the service returns different values. The second test validates the effect of changing the user name. The third test checks that the component displays the proper message when there is no logged-in user. Component with async service ---------------------------- In this sample, the `AboutComponent` template hosts a `TwainComponent`. The `TwainComponent` displays Mark Twain quotes. ``` template: ` <p class="twain"><i>{{quote | async}}</i></p> <button type="button" (click)="getQuote()">Next quote</button> <p class="error" *ngIf="errorMessage">{{ errorMessage }}</p>`, ``` > **NOTE**: The value of the component's `quote` property passes through an `[AsyncPipe](../api/common/asyncpipe)`. That means the property returns either a `Promise` or an `Observable`. > > In this example, the `TwainComponent.getQuote()` method tells you that the `quote` property returns an `Observable`. ``` getQuote() { this.errorMessage = ''; this.quote = this.twainService.getQuote().pipe( startWith('...'), catchError( (err: any) => { // Wait a turn because errorMessage already set once this turn setTimeout(() => this.errorMessage = err.message || err.toString()); return of('...'); // reset message to placeholder }) ); ``` The `TwainComponent` gets quotes from an injected `TwainService`. The component starts the returned `Observable` with a placeholder value (`'...'`), before the service can return its first quote. The `catchError` intercepts service errors, prepares an error message, and returns the placeholder value on the success channel. 
It must wait a tick to set the `errorMessage` in order to avoid updating that message twice in the same change detection cycle. These are all features you'll want to test. #### Testing with a spy When testing a component, only the service's public API should matter. In general, tests themselves should not make calls to remote servers. They should emulate such calls. The setup in this `app/twain/twain.component.spec.ts` shows one way to do that: ``` beforeEach(() => { testQuote = 'Test Quote'; // Create a fake TwainService object with a `getQuote()` spy const twainService = jasmine.createSpyObj('TwainService', ['getQuote']); // Make the spy return a synchronous Observable with the test data getQuoteSpy = twainService.getQuote.and.returnValue(of(testQuote)); TestBed.configureTestingModule({ declarations: [TwainComponent], providers: [{provide: TwainService, useValue: twainService}] }); fixture = TestBed.createComponent(TwainComponent); component = fixture.componentInstance; quoteEl = fixture.nativeElement.querySelector('.twain'); }); ``` Focus on the spy. ``` // Create a fake TwainService object with a `getQuote()` spy const twainService = jasmine.createSpyObj('TwainService', ['getQuote']); // Make the spy return a synchronous Observable with the test data getQuoteSpy = twainService.getQuote.and.returnValue(of(testQuote)); ``` The spy is designed such that any call to `getQuote` receives an observable with a test quote. Unlike the real `getQuote()` method, this spy bypasses the server and returns a synchronous observable whose value is available immediately. You can write many useful tests with this spy, even though its `Observable` is synchronous. #### Synchronous tests A key advantage of a synchronous `Observable` is that you can often turn asynchronous processes into synchronous tests. ``` it('should show quote after component initialized', () => { fixture.detectChanges(); // onInit() // sync spy result shows testQuote immediately after init expect(quoteEl.textContent).toBe(testQuote); expect(getQuoteSpy.calls.any()) .withContext('getQuote called') .toBe(true); }); ``` Because the spy result returns synchronously, the `getQuote()` method updates the message on screen immediately *after* the first change detection cycle during which Angular calls `ngOnInit`. You're not so lucky when testing the error path. Although the service spy will return an error synchronously, the component method calls `setTimeout()`. The test must wait at least one full turn of the JavaScript engine before the value becomes available. The test must become *asynchronous*. #### Async test with `[fakeAsync](../api/core/testing/fakeasync)()` To use `[fakeAsync](../api/core/testing/fakeasync)()` functionality, you must import `zone.js/testing` in your test setup file. If you created your project with the Angular CLI, `zone-testing` is already imported in `src/test.ts`. The following test confirms the expected behavior when the service returns an `ErrorObservable`. 
``` it('should display error when TwainService fails', fakeAsync(() => { // tell spy to return an error observable getQuoteSpy.and.returnValue(throwError(() => new Error('TwainService test failure'))); fixture.detectChanges(); // onInit() // sync spy errors immediately after init tick(); // flush the component's setTimeout() fixture.detectChanges(); // update errorMessage within setTimeout() expect(errorMessage()) .withContext('should display error') .toMatch(/test failure/, ); expect(quoteEl.textContent) .withContext('should show placeholder') .toBe('...'); })); ``` > **NOTE**: The `it()` function receives an argument of the following form. > > ``` fakeAsync(() => { /* test body */ }) ``` The `[fakeAsync](../api/core/testing/fakeasync)()` function enables a linear coding style by running the test body in a special `[fakeAsync](../api/core/testing/fakeasync) test zone`. The test body appears to be synchronous. There is no nested syntax (like a `Promise.then()`) to disrupt the flow of control. > Limitation: The `[fakeAsync](../api/core/testing/fakeasync)()` function won't work if the test body makes an `XMLHttpRequest` (XHR) call. XHR calls within a test are rare, but if you need to call XHR, see the [`waitForAsync()`](testing-components-scenarios#waitForAsync) section. > > #### The `[tick](../api/core/testing/tick)()` function You do have to call [tick()](../api/core/testing/tick) to advance the virtual clock. Calling [tick()](../api/core/testing/tick) simulates the passage of time until all pending asynchronous activities finish. In this case, it waits for the error handler's `setTimeout()`. The [tick()](../api/core/testing/tick) function accepts `millis` and `tickOptions` as parameters. The `millis` parameter specifies how much the virtual clock advances and defaults to `0` if not provided. For example, if you have a `setTimeout(fn, 100)` in a `[fakeAsync](../api/core/testing/fakeasync)()` test, you need to use `[tick](../api/core/testing/tick)(100)` to trigger the fn callback. The optional `tickOptions` parameter has a property named `processNewMacroTasksSynchronously`. The `processNewMacroTasksSynchronously` property represents whether to invoke new generated macro tasks when ticking and defaults to `true`. ``` it('should run timeout callback with delay after call tick with millis', fakeAsync(() => { let called = false; setTimeout(() => { called = true; }, 100); tick(100); expect(called).toBe(true); })); ``` The [tick()](../api/core/testing/tick) function is one of the Angular testing utilities that you import with `[TestBed](../api/core/testing/testbed)`. It's a companion to `[fakeAsync](../api/core/testing/fakeasync)()` and you can only call it within a `[fakeAsync](../api/core/testing/fakeasync)()` body. #### tickOptions In this example, you have a new macro task, the nested `setTimeout` function. By default, when the `[tick](../api/core/testing/tick)` is setTimeout, `outside` and `nested` will both be triggered. ``` it('should run new macro task callback with delay after call tick with millis', fakeAsync(() => { function nestedTimer(cb: () => any): void { setTimeout(() => setTimeout(() => cb())); } const callback = jasmine.createSpy('callback'); nestedTimer(callback); expect(callback).not.toHaveBeenCalled(); tick(0); // the nested timeout will also be triggered expect(callback).toHaveBeenCalled(); })); ``` In some case, you don't want to trigger the new macro task when ticking. 
You can use `[tick](../api/core/testing/tick)(millis, {processNewMacroTasksSynchronously: false})` to not invoke a new macro task. ``` it('should not run new macro task callback with delay after call tick with millis', fakeAsync(() => { function nestedTimer(cb: () => any): void { setTimeout(() => setTimeout(() => cb())); } const callback = jasmine.createSpy('callback'); nestedTimer(callback); expect(callback).not.toHaveBeenCalled(); tick(0, {processNewMacroTasksSynchronously: false}); // the nested timeout will not be triggered expect(callback).not.toHaveBeenCalled(); tick(0); expect(callback).toHaveBeenCalled(); })); ``` #### Comparing dates inside fakeAsync() `[fakeAsync](../api/core/testing/fakeasync)()` simulates passage of time, which lets you calculate the difference between dates inside `[fakeAsync](../api/core/testing/fakeasync)()`. ``` it('should get Date diff correctly in fakeAsync', fakeAsync(() => { const start = Date.now(); tick(100); const end = Date.now(); expect(end - start).toBe(100); })); ``` #### jasmine.clock with fakeAsync() Jasmine also provides a `clock` feature to mock dates. Angular automatically runs tests that are run after `jasmine.clock().install()` is called inside a `[fakeAsync](../api/core/testing/fakeasync)()` method until `jasmine.clock().uninstall()` is called. `[fakeAsync](../api/core/testing/fakeasync)()` is not needed and throws an error if nested. By default, this feature is disabled. To enable it, set a global flag before importing `zone-testing`. If you use the Angular CLI, configure this flag in `src/test.ts`. ``` (window as any)['__zone_symbol__fakeAsyncPatchLock'] = true; import 'zone.js/testing'; ``` ``` describe('use jasmine.clock()', () => { // need to config __zone_symbol__fakeAsyncPatchLock flag // before loading zone.js/testing beforeEach(() => { jasmine.clock().install(); }); afterEach(() => { jasmine.clock().uninstall(); }); it('should auto enter fakeAsync', () => { // is in fakeAsync now, don't need to call fakeAsync(testFn) let called = false; setTimeout(() => { called = true; }, 100); jasmine.clock().tick(100); expect(called).toBe(true); }); }); ``` #### Using the RxJS scheduler inside fakeAsync() You can also use RxJS scheduler in `[fakeAsync](../api/core/testing/fakeasync)()` just like using `setTimeout()` or `setInterval()`, but you need to import `zone.js/plugins/zone-patch-rxjs-fake-async` to patch RxJS scheduler. ``` it('should get Date diff correctly in fakeAsync with rxjs scheduler', fakeAsync(() => { // need to add `import 'zone.js/plugins/zone-patch-rxjs-fake-async' // to patch rxjs scheduler let result = ''; of('hello').pipe(delay(1000)).subscribe(v => { result = v; }); expect(result).toBe(''); tick(1000); expect(result).toBe('hello'); const start = new Date().getTime(); let dateDiff = 0; interval(1000).pipe(take(2)).subscribe(() => dateDiff = (new Date().getTime() - start)); tick(1000); expect(dateDiff).toBe(1000); tick(1000); expect(dateDiff).toBe(2000); })); ``` #### Support more macroTasks By default, `[fakeAsync](../api/core/testing/fakeasync)()` supports the following macro tasks. * `setTimeout` * `setInterval` * `requestAnimationFrame` * `webkitRequestAnimationFrame` * `mozRequestAnimationFrame` If you run other macro tasks such as `HTMLCanvasElement.toBlob()`, an *"Unknown macroTask scheduled in fake async test"* error is thrown. 
``` import { fakeAsync, TestBed, tick } from '@angular/core/testing'; import { CanvasComponent } from './canvas.component'; describe('CanvasComponent', () => { beforeEach(async () => { await TestBed .configureTestingModule({ declarations: [CanvasComponent], }) .compileComponents(); }); it('should be able to generate blob data from canvas', fakeAsync(() => { const fixture = TestBed.createComponent(CanvasComponent); const canvasComp = fixture.componentInstance; fixture.detectChanges(); expect(canvasComp.blobSize).toBe(0); tick(); expect(canvasComp.blobSize).toBeGreaterThan(0); })); }); ``` ``` import { Component, AfterViewInit, ViewChild, ElementRef } from '@angular/core'; @Component({ selector: 'sample-canvas', template: '<canvas #sampleCanvas width="200" height="200"></canvas>', }) export class CanvasComponent implements AfterViewInit { blobSize = 0; @ViewChild('sampleCanvas') sampleCanvas!: ElementRef; ngAfterViewInit() { const canvas: HTMLCanvasElement = this.sampleCanvas.nativeElement; const context = canvas.getContext('2d')!; context.clearRect(0, 0, 200, 200); context.fillStyle = '#FF1122'; context.fillRect(0, 0, 200, 200); canvas.toBlob(blob => { this.blobSize = blob?.size ?? 0; }); } } ``` If you want to support such a case, you need to define the macro task you want to support in `beforeEach()`. For example: ``` beforeEach(() => { (window as any).__zone_symbol__FakeAsyncTestMacroTask = [ { source: 'HTMLCanvasElement.toBlob', callbackArgs: [{size: 200}], }, ]; }); ``` > **NOTE**: In order to make the `<canvas>` element Zone.js-aware in your app, you need to import the `zone-patch-canvas` patch (either in `polyfills.ts` or in the specific file that uses `<canvas>`): > > ``` // Import patch to make async `HTMLCanvasElement` methods (such as `.toBlob()`) Zone.js-aware. // Either import in `polyfills.ts` (if used in more than one places in the app) or in the component // file using `HTMLCanvasElement` (if it is only used in a single file). import 'zone.js/plugins/zone-patch-canvas'; ``` #### Async observables You might be satisfied with the test coverage of these tests. However, you might be troubled by the fact that the real service doesn't quite behave this way. The real service sends requests to a remote server. A server takes time to respond and the response certainly won't be available immediately as in the previous two tests. Your tests will reflect the real world more faithfully if you return an *asynchronous* observable from the `getQuote()` spy like this. ``` // Simulate delayed observable values with the `asyncData()` helper getQuoteSpy.and.returnValue(asyncData(testQuote)); ``` #### Async observable helpers The async observable was produced by an `asyncData` helper. The `asyncData` helper is a utility function that you'll have to write yourself, or copy this one from the sample code. ``` /** * Create async observable that emits-once and completes * after a JS engine turn */ export function asyncData<T>(data: T) { return defer(() => Promise.resolve(data)); } ``` This helper's observable emits the `data` value in the next turn of the JavaScript engine. The [RxJS `defer()` operator](http://reactivex.io/documentation/operators/defer.html) returns an observable. It takes a factory function that returns either a promise or an observable. When something subscribes to *defer*'s observable, it adds the subscriber to a new observable created with that factory. 
The `defer()` operator transforms the `Promise.resolve()` into a new observable that, like `[HttpClient](../api/common/http/httpclient)`, emits once and completes. Subscribers are unsubscribed after they receive the data value. There's a similar helper for producing an async error. ``` /** * Create async observable error that errors * after a JS engine turn */ export function asyncError<T>(errorObject: any) { return defer(() => Promise.reject(errorObject)); } ``` #### More async tests Now that the `getQuote()` spy is returning async observables, most of your tests will have to be async as well. Here's a `[fakeAsync](../api/core/testing/fakeasync)()` test that demonstrates the data flow you'd expect in the real world. ``` it('should show quote after getQuote (fakeAsync)', fakeAsync(() => { fixture.detectChanges(); // ngOnInit() expect(quoteEl.textContent) .withContext('should show placeholder') .toBe('...'); tick(); // flush the observable to get the quote fixture.detectChanges(); // update view expect(quoteEl.textContent) .withContext('should show quote') .toBe(testQuote); expect(errorMessage()) .withContext('should not show error') .toBeNull(); })); ``` Notice that the quote element displays the placeholder value (`'...'`) after `ngOnInit()`. The first quote hasn't arrived yet. To flush the first quote from the observable, you call [tick()](../api/core/testing/tick). Then call `detectChanges()` to tell Angular to update the screen. Then you can assert that the quote element displays the expected text. #### Async test with `[waitForAsync](../api/core/testing/waitforasync)()` To use `[waitForAsync](../api/core/testing/waitforasync)()` functionality, you must import `zone.js/testing` in your test setup file. If you created your project with the Angular CLI, `zone-testing` is already imported in `src/test.ts`. Here's the previous `[fakeAsync](../api/core/testing/fakeasync)()` test, re-written with the `[waitForAsync](../api/core/testing/waitforasync)()` utility. ``` it('should show quote after getQuote (waitForAsync)', waitForAsync(() => { fixture.detectChanges(); // ngOnInit() expect(quoteEl.textContent) .withContext('should show placeholder') .toBe('...'); fixture.whenStable().then(() => { // wait for async getQuote fixture.detectChanges(); // update view with quote expect(quoteEl.textContent).toBe(testQuote); expect(errorMessage()) .withContext('should not show error') .toBeNull(); }); })); ``` The `[waitForAsync](../api/core/testing/waitforasync)()` utility hides some asynchronous boilerplate by arranging for the tester's code to run in a special *async test zone*. You don't need to pass Jasmine's `done()` into the test and call `done()` because it is `undefined` in promise or observable callbacks. But the test's asynchronous nature is revealed by the call to `fixture.whenStable()`, which breaks the linear flow of control. When using an `intervalTimer()` such as `setInterval()` in `[waitForAsync](../api/core/testing/waitforasync)()`, remember to cancel the timer with `clearInterval()` after the test, otherwise the `[waitForAsync](../api/core/testing/waitforasync)()` never ends. #### `whenStable` The test must wait for the `getQuote()` observable to emit the next quote. Instead of calling [tick()](../api/core/testing/tick), it calls `fixture.whenStable()`. The `fixture.whenStable()` returns a promise that resolves when the JavaScript engine's task queue becomes empty. In this example, the task queue becomes empty when the observable emits the first quote. 
The test resumes within the promise callback, which calls `detectChanges()` to update the quote element with the expected text. #### Jasmine `done()` While the `[waitForAsync](../api/core/testing/waitforasync)()` and `[fakeAsync](../api/core/testing/fakeasync)()` functions greatly simplify Angular asynchronous testing, you can still fall back to the traditional technique and pass `it` a function that takes a [`done` callback](https://jasmine.github.io/2.0/introduction.html#section-Asynchronous_Support). You can't call `done()` in `[waitForAsync](../api/core/testing/waitforasync)()` or `[fakeAsync](../api/core/testing/fakeasync)()` functions, because the `done` parameter is `undefined`. Now you are responsible for chaining promises, handling errors, and calling `done()` at the appropriate moments. Writing test functions with `done()` is more cumbersome than `[waitForAsync](../api/core/testing/waitforasync)()` and `[fakeAsync](../api/core/testing/fakeasync)()`, but it is occasionally necessary when code involves an `intervalTimer()` such as `setInterval()`. Here are two more versions of the previous test, written with `done()`. The first one subscribes to the `Observable` exposed to the template by the component's `quote` property. ``` it('should show last quote (quote done)', (done: DoneFn) => { fixture.detectChanges(); component.quote.pipe(last()).subscribe(() => { fixture.detectChanges(); // update view with quote expect(quoteEl.textContent).toBe(testQuote); expect(errorMessage()) .withContext('should not show error') .toBeNull(); done(); }); }); ``` The RxJS `last()` operator emits the observable's last value before completing, which will be the test quote. The `subscribe` callback calls `detectChanges()` to update the quote element with the test quote, in the same manner as the earlier tests. In some tests, you're more interested in how an injected service method was called and what values it returned than in what appears on screen. A service spy, such as the `getQuote()` spy of the fake `TwainService`, can give you that information and make assertions about the state of the view. ``` it('should show quote after getQuote (spy done)', (done: DoneFn) => { fixture.detectChanges(); // the spy's most recent call returns the observable with the test quote getQuoteSpy.calls.mostRecent().returnValue.subscribe(() => { fixture.detectChanges(); // update view with quote expect(quoteEl.textContent).toBe(testQuote); expect(errorMessage()) .withContext('should not show error') .toBeNull(); done(); }); }); ``` Component marble tests ---------------------- The previous `TwainComponent` tests simulated an asynchronous observable response from the `TwainService` with the `asyncData` and `asyncError` utilities. These are short, simple functions that you can write yourself. Unfortunately, they're too simple for many common scenarios. An observable often emits multiple times, perhaps after a significant delay. A component might coordinate multiple observables with overlapping sequences of values and errors. **RxJS marble testing** is a great way to test observable scenarios, both simple and complex. You've likely seen the [marble diagrams](https://rxmarbles.com) that illustrate how observables work. Marble testing uses a similar marble language to specify the observable streams and expectations in your tests. The following examples revisit two of the `TwainComponent` tests with marble testing. Start by installing the `jasmine-marbles` npm package. Then import the symbols you need.
``` import { cold, getTestScheduler } from 'jasmine-marbles'; ``` Here's the complete test for getting a quote: ``` it('should show quote after getQuote (marbles)', () => { // observable test quote value and complete(), after delay const q$ = cold('---x|', { x: testQuote }); getQuoteSpy.and.returnValue( q$ ); fixture.detectChanges(); // ngOnInit() expect(quoteEl.textContent) .withContext('should show placeholder') .toBe('...'); getTestScheduler().flush(); // flush the observables fixture.detectChanges(); // update view expect(quoteEl.textContent) .withContext('should show quote') .toBe(testQuote); expect(errorMessage()) .withContext('should not show error') .toBeNull(); }); ``` Notice that the Jasmine test is synchronous. There's no `[fakeAsync](../api/core/testing/fakeasync)()`. Marble testing uses a test scheduler to simulate the passage of time in a synchronous test. The beauty of marble testing is in the visual definition of the observable streams. This test defines a [*cold* observable](testing-components-scenarios#cold-observable) that waits three [frames](testing-components-scenarios#marble-frame) (`---`), emits a value (`x`), and completes (`|`). In the second argument you map the value marker (`x`) to the emitted value (`testQuote`). ``` const q$ = cold('---x|', { x: testQuote }); ``` The marble library constructs the corresponding observable, which the test sets as the `getQuote` spy's return value. When you're ready to activate the marble observables, you tell the `TestScheduler` to *flush* its queue of prepared tasks like this. ``` getTestScheduler().flush(); // flush the observables ``` This step serves a purpose analogous to [tick()](../api/core/testing/tick) and `whenStable()` in the earlier `[fakeAsync](../api/core/testing/fakeasync)()` and `[waitForAsync](../api/core/testing/waitforasync)()` examples. The balance of the test is the same as those examples. #### Marble error testing Here's the marble testing version of the `getQuote()` error test. ``` it('should display error when TwainService fails', fakeAsync(() => { // observable error after delay const q$ = cold('---#|', null, new Error('TwainService test failure')); getQuoteSpy.and.returnValue( q$ ); fixture.detectChanges(); // ngOnInit() expect(quoteEl.textContent) .withContext('should show placeholder') .toBe('...'); getTestScheduler().flush(); // flush the observables tick(); // component shows error after a setTimeout() fixture.detectChanges(); // update error message expect(errorMessage()) .withContext('should display error') .toMatch(/test failure/); expect(quoteEl.textContent) .withContext('should show placeholder') .toBe('...'); })); ``` It's still an async test, calling `[fakeAsync](../api/core/testing/fakeasync)()` and [tick()](../api/core/testing/tick), because the component itself calls `setTimeout()` when processing errors. Look at the marble observable definition. ``` const q$ = cold('---#|', null, new Error('TwainService test failure')); ``` This is a *cold* observable that waits three frames and then emits an error, the hash (`#`) character indicates the timing of the error that is specified in the third argument. The second argument is null because the observable never emits a value. #### Learn about marble testing A *marble frame* is a virtual unit of testing time. Each symbol (`-`, `x`, `|`, `#`) marks the passing of one frame. A *cold* observable doesn't produce values until you subscribe to it. Most of your application observables are cold. All [*HttpClient*](http) methods return cold observables. 
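As a small illustration of this cold behavior (this snippet is not part of the guide's sample code), each subscriber to a cold observable triggers its own run of the producer function:

```ts
import { Observable } from 'rxjs';

// Cold: the producer function runs separately for every subscriber.
const quote$ = new Observable<string>(subscriber => {
  console.log('producer started');        // logs once per subscribe()
  subscriber.next('The report of my death was an exaggeration.');
  subscriber.complete();
});

quote$.subscribe(q => console.log('first subscriber:', q));
quote$.subscribe(q => console.log('second subscriber:', q)); // starts a fresh producer
```
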
A *hot* observable is already producing values *before* you subscribe to it. The [`Router.events`](../api/router/router#events) observable, which reports router activity, is a *hot* observable. RxJS marble testing is a rich subject, beyond the scope of this guide. Learn about it on the web, starting with the [official documentation](https://rxjs.dev/guide/testing/marble-testing). Component with inputs and outputs --------------------------------- A component with inputs and outputs typically appears inside the view template of a host component. The host uses a property binding to set the input property and an event binding to listen to events raised by the output property. The testing goal is to verify that such bindings work as expected. The tests should set input values and listen for output events. The `DashboardHeroComponent` is a tiny example of a component in this role. It displays an individual hero provided by the `DashboardComponent`. Clicking that hero tells the `DashboardComponent` that the user has selected the hero. The `DashboardHeroComponent` is embedded in the `DashboardComponent` template like this: ``` <dashboard-hero *ngFor="let hero of heroes" class="col-1-4" [hero]=hero (selected)="gotoDetail($event)" > </dashboard-hero> ``` The `DashboardHeroComponent` appears in an `*[ngFor](../api/common/ngfor)` repeater, which sets each component's `hero` input property to the looping value and listens for the component's `selected` event. Here's the component's full definition: ``` @Component({ selector: 'dashboard-hero', template: ` <button type="button" (click)="click()" class="hero"> {{hero.name | uppercase}} </button> `, styleUrls: [ './dashboard-hero.component.css' ] }) export class DashboardHeroComponent { @Input() hero!: Hero; @Output() selected = new EventEmitter<Hero>(); click() { this.selected.emit(this.hero); } } ``` While testing a component this simple has little intrinsic value, it's worth knowing how. Use one of these approaches: * Test it as used by `DashboardComponent` * Test it as a stand-alone component * Test it as used by a substitute for `DashboardComponent` A quick look at the `DashboardComponent` constructor discourages the first approach: ``` constructor( private router: Router, private heroService: HeroService) { } ``` The `DashboardComponent` depends on the Angular router and the `HeroService`. You'd probably have to replace them both with test doubles, which is a lot of work. The router seems particularly challenging. > The [following discussion](testing-components-scenarios#routing-component) covers testing components that require the router. > > The immediate goal is to test the `DashboardHeroComponent`, not the `DashboardComponent`, so, try the second and third options. #### Test `DashboardHeroComponent` stand-alone Here's the meat of the spec file setup. 
``` TestBed .configureTestingModule({declarations: [DashboardHeroComponent]}) fixture = TestBed.createComponent(DashboardHeroComponent); comp = fixture.componentInstance; // find the hero's DebugElement and element heroDe = fixture.debugElement.query(By.css('.hero')); heroEl = heroDe.nativeElement; // mock the hero supplied by the parent component expectedHero = {id: 42, name: 'Test Name'}; // simulate the parent setting the input property with that hero comp.hero = expectedHero; // trigger initial data binding fixture.detectChanges(); ``` Notice how the setup code assigns a test hero (`expectedHero`) to the component's `hero` property, emulating the way the `DashboardComponent` would set it using the property binding in its repeater. The following test verifies that the hero name is propagated to the template using a binding. ``` it('should display hero name in uppercase', () => { const expectedPipedName = expectedHero.name.toUpperCase(); expect(heroEl.textContent).toContain(expectedPipedName); }); ``` Because the [template](testing-components-scenarios#dashboard-hero-component) passes the hero name through the Angular `[UpperCasePipe](../api/common/uppercasepipe)`, the test must match the element value with the upper-cased name. > This small test demonstrates how Angular tests can verify a component's visual representation —something not possible with [component class tests](testing-components-basics#component-class-testing)— at low cost and without resorting to much slower and more complicated end-to-end tests. > > #### Clicking Clicking the hero should raise a `selected` event that the host component (`DashboardComponent` presumably) can hear: ``` it('should raise selected event when clicked (triggerEventHandler)', () => { let selectedHero: Hero | undefined; comp.selected.pipe(first()).subscribe((hero: Hero) => selectedHero = hero); heroDe.triggerEventHandler('click'); expect(selectedHero).toBe(expectedHero); }); ``` The component's `selected` property returns an `[EventEmitter](../api/core/eventemitter)`, which looks like an RxJS synchronous `Observable` to consumers. The test subscribes to it *explicitly* just as the host component does *implicitly*. If the component behaves as expected, clicking the hero's element should tell the component's `selected` property to emit the `hero` object. The test detects that event through its subscription to `selected`. #### `triggerEventHandler` The `heroDe` in the previous test is a `[DebugElement](../api/core/debugelement)` that represents the hero `<div>`. It has Angular properties and methods that abstract interaction with the native element. This test calls the `DebugElement.triggerEventHandler` with the "click" event name. The "click" event binding responds by calling `DashboardHeroComponent.click()`. The Angular `DebugElement.triggerEventHandler` can raise *any data-bound event* by its *event name*. The second parameter is the event object passed to the handler. The test triggered a "click" event. ``` heroDe.triggerEventHandler('click'); ``` In this case, the test correctly assumes that the runtime event handler, the component's `click()` method, doesn't care about the event object. > Other handlers are less forgiving. For example, the `[RouterLink](../api/router/routerlink)` directive expects an object with a `button` property that identifies which mouse button, if any, was pressed during the click. The `[RouterLink](../api/router/routerlink)` directive throws an error if the event object is missing. 
> > #### Click the element The following test alternative calls the native element's own `click()` method, which is perfectly fine for *this component*. ``` it('should raise selected event when clicked (element.click)', () => { let selectedHero: Hero | undefined; comp.selected.pipe(first()).subscribe((hero: Hero) => selectedHero = hero); heroEl.click(); expect(selectedHero).toBe(expectedHero); }); ``` #### `click()` helper Clicking a button, an anchor, or an arbitrary HTML element is a common test task. Make that consistent and straightforward by encapsulating the *click-triggering* process in a helper such as the following `click()` function: ``` /** Button events to pass to `DebugElement.triggerEventHandler` for RouterLink event handler */ export const ButtonClickEvents = { left: { button: 0 }, right: { button: 2 } }; /** Simulate element click. Defaults to mouse left-button click event. */ export function click(el: DebugElement | HTMLElement, eventObj: any = ButtonClickEvents.left): void { if (el instanceof HTMLElement) { el.click(); } else { el.triggerEventHandler('click', eventObj); } } ``` The first parameter is the *element-to-click*. If you want, pass a custom event object as the second parameter. The default is a partial [left-button mouse event object](https://developer.mozilla.org/docs/Web/API/MouseEvent/button) accepted by many handlers including the `[RouterLink](../api/router/routerlink)` directive. > The `click()` helper function is **not** one of the Angular testing utilities. It's a function defined in *this guide's sample code*. All of the sample tests use it. If you like it, add it to your own collection of helpers. > > Here's the previous test, rewritten using the click helper. ``` it('should raise selected event when clicked (click helper with DebugElement)', () => { let selectedHero: Hero | undefined; comp.selected.pipe(first()).subscribe((hero: Hero) => selectedHero = hero); click(heroDe); // click helper with DebugElement expect(selectedHero).toBe(expectedHero); }); ``` Component inside a test host ---------------------------- The previous tests played the role of the host `DashboardComponent` themselves. But does the `DashboardHeroComponent` work correctly when properly data-bound to a host component? You could test with the actual `DashboardComponent`. But doing so could require a lot of setup, especially when its template features an `*[ngFor](../api/common/ngfor)` repeater, other components, layout HTML, additional bindings, a constructor that injects multiple services, and it starts interacting with those services right away. Imagine the effort to disable these distractions, just to prove a point that can be made satisfactorily with a *test host* like this one: ``` @Component({ template: ` <dashboard-hero [hero]="hero" (selected)="onSelected($event)"> </dashboard-hero>` }) class TestHostComponent { hero: Hero = {id: 42, name: 'Test Name'}; selectedHero: Hero | undefined; onSelected(hero: Hero) { this.selectedHero = hero; } } ``` This test host binds to `DashboardHeroComponent` as the `DashboardComponent` would but without the noise of the `[Router](../api/router/router)`, the `HeroService`, or the `*[ngFor](../api/common/ngfor)` repeater. The test host sets the component's `hero` input property with its test hero. It binds the component's `selected` event with its `onSelected` handler, which records the emitted hero in its `selectedHero` property. 
Later, the tests will be able to check `selectedHero` to verify that the `DashboardHeroComponent.selected` event emitted the expected hero. The setup for the `test-host` tests is similar to the setup for the stand-alone tests: ``` TestBed .configureTestingModule({declarations: [DashboardHeroComponent, TestHostComponent]}) // create TestHostComponent instead of DashboardHeroComponent fixture = TestBed.createComponent(TestHostComponent); testHost = fixture.componentInstance; heroEl = fixture.nativeElement.querySelector('.hero'); fixture.detectChanges(); // trigger initial data binding ``` This testing module configuration shows three important differences: * It *declares* both the `DashboardHeroComponent` and the `TestHostComponent` * It *creates* the `TestHostComponent` instead of the `DashboardHeroComponent` * The `TestHostComponent` sets the `DashboardHeroComponent.hero` with a binding The `[createComponent](../api/core/createcomponent)` returns a `fixture` that holds an instance of `TestHostComponent` instead of an instance of `DashboardHeroComponent`. Creating the `TestHostComponent` has the side effect of creating a `DashboardHeroComponent` because the latter appears within the template of the former. The query for the hero element (`heroEl`) still finds it in the test DOM, albeit at greater depth in the element tree than before. The tests themselves are almost identical to the stand-alone version: ``` it('should display hero name', () => { const expectedPipedName = testHost.hero.name.toUpperCase(); expect(heroEl.textContent).toContain(expectedPipedName); }); it('should raise selected event when clicked', () => { click(heroEl); // selected hero should be the same data bound hero expect(testHost.selectedHero).toBe(testHost.hero); }); ``` Only the selected event test differs. It confirms that the selected `DashboardHeroComponent` hero really does find its way up through the event binding to the host component. Routing component ----------------- A *routing component* is a component that tells the `[Router](../api/router/router)` to navigate to another component. The `DashboardComponent` is a *routing component* because the user can navigate to the `HeroDetailComponent` by clicking on one of the *hero buttons* on the dashboard. Routing is pretty complicated. Testing the `DashboardComponent` seemed daunting in part because it involves the `[Router](../api/router/router)`, which it injects together with the `HeroService`. ``` constructor( private router: Router, private heroService: HeroService) { } ``` Mocking the `HeroService` with a spy is a [familiar story](testing-components-scenarios#component-with-async-service). But the `[Router](../api/router/router)` has a complicated API and is entwined with other services and application preconditions. Might it be difficult to mock? Fortunately, not in this case because the `DashboardComponent` isn't doing much with the `[Router](../api/router/router)` ``` gotoDetail(hero: Hero) { const url = `/heroes/${hero.id}`; this.router.navigateByUrl(url); } ``` This is often the case with *routing components*. As a rule you test the component, not the router, and care only if the component navigates with the right address under the given conditions. Providing a router spy for *this component* test suite happens to be as easy as providing a `HeroService` spy. 
``` const routerSpy = jasmine.createSpyObj('Router', ['navigateByUrl']); const heroServiceSpy = jasmine.createSpyObj('HeroService', ['getHeroes']); TestBed .configureTestingModule({ providers: [ {provide: HeroService, useValue: heroServiceSpy}, {provide: Router, useValue: routerSpy} ] }) ``` The following test clicks the displayed hero and confirms that `Router.navigateByUrl` is called with the expected url. ``` it('should tell ROUTER to navigate when hero clicked', () => { heroClick(); // trigger click on first inner <div class="hero"> // args passed to router.navigateByUrl() spy const spy = router.navigateByUrl as jasmine.Spy; const navArgs = spy.calls.first().args[0]; // expecting to navigate to id of the component's first hero const id = comp.heroes[0].id; expect(navArgs) .withContext('should nav to HeroDetail for first hero') .toBe('/heroes/' + id); }); ``` Routed components ----------------- A *routed component* is the destination of a `[Router](../api/router/router)` navigation. It can be trickier to test, especially when the route to the component *includes parameters*. The `HeroDetailComponent` is a *routed component* that is the destination of such a route. When a user clicks a *Dashboard* hero, the `DashboardComponent` tells the `[Router](../api/router/router)` to navigate to `heroes/:id`. The `:id` is a route parameter whose value is the `id` of the hero to edit. The `[Router](../api/router/router)` matches that URL to a route to the `HeroDetailComponent`. It creates an `[ActivatedRoute](../api/router/activatedroute)` object with the routing information and injects it into a new instance of the `HeroDetailComponent`. Here's the `HeroDetailComponent` constructor: ``` constructor( private heroDetailService: HeroDetailService, private route: ActivatedRoute, private router: Router) { } ``` The `HeroDetail` component needs the `id` parameter so it can fetch the corresponding hero using the `HeroDetailService`. The component has to get the `id` from the `[ActivatedRoute.paramMap](../api/router/activatedroute#paramMap)` property which is an `Observable`. It can't just reference the `id` property of the `[ActivatedRoute.paramMap](../api/router/activatedroute#paramMap)`. The component has to *subscribe* to the `[ActivatedRoute.paramMap](../api/router/activatedroute#paramMap)` observable and be prepared for the `id` to change during its lifetime. ``` ngOnInit(): void { // get hero when `id` param changes this.route.paramMap.subscribe(pmap => this.getHero(pmap.get('id'))); } ``` > The [ActivatedRoute in action](router-tutorial-toh#activated-route-in-action) section of the [Router tutorial: tour of heroes](router-tutorial-toh) guide covers `[ActivatedRoute.paramMap](../api/router/activatedroute#paramMap)` in more detail. > > Tests can explore how the `HeroDetailComponent` responds to different `id` parameter values by manipulating the `[ActivatedRoute](../api/router/activatedroute)` injected into the component's constructor. You know how to spy on the `[Router](../api/router/router)` and a data service. You'll take a different approach with `[ActivatedRoute](../api/router/activatedroute)` because * `paramMap` returns an `Observable` that can emit more than one value during a test * You need the router helper function, `[convertToParamMap](../api/router/converttoparammap)()`, to create a `[ParamMap](../api/router/parammap)` * Other *routed component* tests need a test double for `[ActivatedRoute](../api/router/activatedroute)` These differences argue for a re-usable stub class. 
#### `ActivatedRouteStub` The following `ActivatedRouteStub` class serves as a test double for `[ActivatedRoute](../api/router/activatedroute)`. ``` import { convertToParamMap, ParamMap, Params } from '@angular/router'; import { ReplaySubject } from 'rxjs'; /** * An ActivateRoute test double with a `paramMap` observable. * Use the `setParamMap()` method to add the next `paramMap` value. */ export class ActivatedRouteStub { // Use a ReplaySubject to share previous values with subscribers // and pump new values into the `paramMap` observable private subject = new ReplaySubject<ParamMap>(); constructor(initialParams?: Params) { this.setParamMap(initialParams); } /** The mock paramMap observable */ readonly paramMap = this.subject.asObservable(); /** Set the paramMap observable's next value */ setParamMap(params: Params = {}) { this.subject.next(convertToParamMap(params)); } } ``` Consider placing such helpers in a `testing` folder sibling to the `app` folder. This sample puts `ActivatedRouteStub` in `testing/activated-route-stub.ts`. > Consider writing a more capable version of this stub class with the [*marble testing library*](testing-components-scenarios#marble-testing). > > #### Testing with `ActivatedRouteStub` Here's a test demonstrating the component's behavior when the observed `id` refers to an existing hero: ``` describe('when navigate to existing hero', () => { let expectedHero: Hero; beforeEach(async () => { expectedHero = firstHero; activatedRoute.setParamMap({id: expectedHero.id}); await createComponent(); }); it("should display that hero's name", () => { expect(page.nameDisplay.textContent).toBe(expectedHero.name); }); }); ``` > In the following section, the `[createComponent](../api/core/createcomponent)()` method and `page` object are discussed. Rely on your intuition for now. > > When the `id` cannot be found, the component should re-route to the `HeroListComponent`. The test suite setup provided the same router spy [described above](testing-components-scenarios#routing-component) which spies on the router without actually navigating. This test expects the component to try to navigate to the `HeroListComponent`. ``` describe('when navigate to non-existent hero id', () => { beforeEach(async () => { activatedRoute.setParamMap({id: 99999}); await createComponent(); }); it('should try to navigate back to hero list', () => { expect(page.gotoListSpy.calls.any()) .withContext('comp.gotoList called') .toBe(true); expect(page.navigateSpy.calls.any()) .withContext('router.navigate called') .toBe(true); }); }); ``` While this application doesn't have a route to the `HeroDetailComponent` that omits the `id` parameter, it might add such a route someday. The component should do something reasonable when there is no `id`. In this implementation, the component should create and display a new hero. New heroes have `id=0` and a blank `name`. This test confirms that the component behaves as expected: ``` describe('when navigate with no hero id', () => { beforeEach(async () => { await createComponent(); }); it('should have hero.id === 0', () => { expect(component.hero.id).toBe(0); }); it('should display empty hero name', () => { expect(page.nameDisplay.textContent).toBe(''); }); }); ``` Nested component tests ---------------------- Component templates often have nested components, whose templates might contain more components. The component tree can be very deep and, most of the time, the nested components play no role in testing the component at the top of the tree. 
The `AppComponent`, for example, displays a navigation bar with anchors and their `[RouterLink](../api/router/routerlink)` directives. ``` <app-banner></app-banner> <app-welcome></app-welcome> <nav> <a routerLink="/dashboard">Dashboard</a> <a routerLink="/heroes">Heroes</a> <a routerLink="/about">About</a> </nav> <router-outlet></router-outlet> ``` While the `AppComponent` *class* is empty, you might want to write unit tests to confirm that the links are wired properly to the `[RouterLink](../api/router/routerlink)` directives, perhaps for the reasons as explained in the [following section](testing-components-scenarios#why-stubbed-routerlink-tests). To validate the links, you don't need the `[Router](../api/router/router)` to navigate and you don't need the `<[router-outlet](../api/router/routeroutlet)>` to mark where the `[Router](../api/router/router)` inserts *routed components*. The `BannerComponent` and `WelcomeComponent` (indicated by `<app-banner>` and `<app-welcome>`) are also irrelevant. Yet any test that creates the `AppComponent` in the DOM also creates instances of these three components and, if you let that happen, you'll have to configure the `[TestBed](../api/core/testing/testbed)` to create them. If you neglect to declare them, the Angular compiler won't recognize the `<app-banner>`, `<app-welcome>`, and `<[router-outlet](../api/router/routeroutlet)>` tags in the `AppComponent` template and will throw an error. If you declare the real components, you'll also have to declare *their* nested components and provide for *all* services injected in *any* component in the tree. That's too much effort just to answer a few simple questions about links. This section describes two techniques for minimizing the setup. Use them, alone or in combination, to stay focused on testing the primary component. ##### Stubbing unneeded components In the first technique, you create and declare stub versions of the components and directive that play little or no role in the tests. ``` @Component({selector: 'app-banner', template: ''}) class BannerStubComponent { } @Component({selector: 'router-outlet', template: ''}) class RouterOutletStubComponent { } @Component({selector: 'app-welcome', template: ''}) class WelcomeStubComponent { } ``` The stub selectors match the selectors for the corresponding real components. But their templates and classes are empty. Then declare them in the `[TestBed](../api/core/testing/testbed)` configuration next to the components, directives, and pipes that need to be real. ``` TestBed .configureTestingModule({ declarations: [ AppComponent, RouterLinkDirectiveStub, BannerStubComponent, RouterOutletStubComponent, WelcomeStubComponent ] }) ``` The `AppComponent` is the test subject, so of course you declare the real version. The `RouterLinkDirectiveStub`, [described later](testing-components-scenarios#routerlink), is a test version of the real `[RouterLink](../api/router/routerlink)` that helps with the link tests. The rest are stubs. #### `[NO\_ERRORS\_SCHEMA](../api/core/no_errors_schema)` In the second approach, add `[NO\_ERRORS\_SCHEMA](../api/core/no_errors_schema)` to the `TestBed.schemas` metadata. ``` TestBed .configureTestingModule({ declarations: [ AppComponent, RouterLinkDirectiveStub ], schemas: [NO_ERRORS_SCHEMA] }) ``` The `[NO\_ERRORS\_SCHEMA](../api/core/no_errors_schema)` tells the Angular compiler to ignore unrecognized elements and attributes. 
The compiler recognizes the `<app-root>` element and the `[routerLink](../api/router/routerlink)` attribute because you declared a corresponding `AppComponent` and `RouterLinkDirectiveStub` in the `[TestBed](../api/core/testing/testbed)` configuration.

But the compiler won't throw an error when it encounters `<app-banner>`, `<app-welcome>`, or `<[router-outlet](../api/router/routeroutlet)>`. It simply renders them as empty tags and the browser ignores them.

You no longer need the stub components.

#### Use both techniques together

These are techniques for *Shallow Component Testing*, so-named because they reduce the visual surface of the component to just those elements in the component's template that matter for tests.

The `[NO\_ERRORS\_SCHEMA](../api/core/no_errors_schema)` approach is the easier of the two but don't overuse it.

The `[NO\_ERRORS\_SCHEMA](../api/core/no_errors_schema)` also prevents the compiler from telling you about the missing components and attributes that you omitted inadvertently or misspelled. You could waste hours chasing phantom bugs that the compiler would have caught in an instant.

The *stub component* approach has another advantage. While the stubs in *this* example were empty, you could give them stripped-down templates and classes if your tests need to interact with them in some way.

In practice you will combine the two techniques in the same setup, as seen in this example.

```
TestBed
  .configureTestingModule({
    declarations: [
      AppComponent,
      BannerStubComponent,
      RouterLinkDirectiveStub
    ],
    schemas: [NO_ERRORS_SCHEMA]
  })
```

The Angular compiler creates the `BannerStubComponent` for the `<app-banner>` element and applies the `RouterLinkDirectiveStub` to the anchors with the `[routerLink](../api/router/routerlink)` attribute, but it ignores the `<app-welcome>` and `<[router-outlet](../api/router/routeroutlet)>` tags.

Components with `[RouterLink](../api/router/routerlink)`
--------------------------------------------------------

The real `[RouterLink](../api/router/routerlink)` directive is quite complicated and entangled with other components and directives of the `[RouterModule](../api/router/routermodule)`. It requires a challenging setup to mock and use in tests.

The `RouterLinkDirectiveStub` in this sample code replaces the real directive with an alternative version designed to validate the kind of anchor tag wiring seen in the `AppComponent` template.

```
@Directive({
  selector: '[routerLink]'
})
export class RouterLinkDirectiveStub {
  @Input('routerLink') linkParams: any;
  navigatedTo: any = null;

  @HostListener('click')
  onClick() {
    this.navigatedTo = this.linkParams;
  }
}
```

The URL bound to the `[[routerLink](../api/router/routerlink)]` attribute flows into the directive's `linkParams` property.

The `[HostListener](../api/core/hostlistener)` wires the click event of the host element (the `<a>` anchor elements in `AppComponent`) to the stub directive's `onClick` method.

Clicking the anchor should trigger the `onClick()` method, which sets the stub's telltale `navigatedTo` property. Tests inspect `navigatedTo` to confirm that clicking the anchor sets the expected route definition.

> Whether the router is configured properly to navigate with that route definition is a question for a separate set of tests.
> > #### `By.directive` and injected directives A little more setup triggers the initial data binding and gets references to the navigation links: ``` beforeEach(() => { fixture.detectChanges(); // trigger initial data binding // find DebugElements with an attached RouterLinkStubDirective linkDes = fixture.debugElement.queryAll(By.directive(RouterLinkDirectiveStub)); // get attached link directive instances // using each DebugElement's injector routerLinks = linkDes.map(de => de.injector.get(RouterLinkDirectiveStub)); }); ``` Three points of special interest: * Locate the anchor elements with an attached directive using `By.directive` * The query returns `[DebugElement](../api/core/debugelement)` wrappers around the matching elements * Each `[DebugElement](../api/core/debugelement)` exposes a dependency injector with the specific instance of the directive attached to that element The `AppComponent` links to validate are as follows: ``` <nav> <a routerLink="/dashboard">Dashboard</a> <a routerLink="/heroes">Heroes</a> <a routerLink="/about">About</a> </nav> ``` Here are some tests that confirm those links are wired to the `[routerLink](../api/router/routerlink)` directives as expected: ``` it('can get RouterLinks from template', () => { expect(routerLinks.length) .withContext('should have 3 routerLinks') .toBe(3); expect(routerLinks[0].linkParams).toBe('/dashboard'); expect(routerLinks[1].linkParams).toBe('/heroes'); expect(routerLinks[2].linkParams).toBe('/about'); }); it('can click Heroes link in template', () => { const heroesLinkDe = linkDes[1]; // heroes link DebugElement const heroesLink = routerLinks[1]; // heroes link directive expect(heroesLink.navigatedTo) .withContext('should not have navigated yet') .toBeNull(); heroesLinkDe.triggerEventHandler('click'); fixture.detectChanges(); expect(heroesLink.navigatedTo).toBe('/heroes'); }); ``` > The "click" test *in this example* is misleading. It tests the `RouterLinkDirectiveStub` rather than the *component*. This is a common failing of directive stubs. > > It has a legitimate purpose in this guide. It demonstrates how to find a `[RouterLink](../api/router/routerlink)` element, click it, and inspect a result, without engaging the full router machinery. This is a skill you might need to test a more sophisticated component, one that changes the display, re-calculates parameters, or re-arranges navigation options when the user clicks the link. > > #### What good are these tests? Stubbed `[RouterLink](../api/router/routerlink)` tests can confirm that a component with links and an outlet is set up properly, that the component has the links it should have, and that they are all pointing in the expected direction. These tests do not concern whether the application will succeed in navigating to the target component when the user clicks a link. Stubbing the RouterLink and RouterOutlet is the best option for such limited testing goals. Relying on the real router would make them brittle. They could fail for reasons unrelated to the component. For example, a navigation guard could prevent an unauthorized user from visiting the `HeroListComponent`. That's not the fault of the `AppComponent` and no change to that component could cure the failed test. A *different* battery of tests can explore whether the application navigates as expected in the presence of conditions that influence guards such as whether the user is authenticated and authorized. 
> A future guide update explains how to write such tests with the `[RouterTestingModule](../api/router/testing/routertestingmodule)`. > > Use a `page` object ------------------- The `HeroDetailComponent` is a simple view with a title, two hero fields, and two buttons. But there's plenty of template complexity even in this simple form. ``` <div *ngIf="hero"> <h2><span>{{hero.name | titlecase}}</span> Details</h2> <div> <span>id: </span>{{hero.id}}</div> <div> <label for="name">name: </label> <input id="name" [(ngModel)]="hero.name" placeholder="name" /> </div> <button type="button" (click)="save()">Save</button> <button type="button" (click)="cancel()">Cancel</button> </div> ``` Tests that exercise the component need … * To wait until a hero arrives before elements appear in the DOM * A reference to the title text * A reference to the name input box to inspect and set it * References to the two buttons so they can click them * Spies for some of the component and router methods Even a small form such as this one can produce a mess of tortured conditional setup and CSS element selection. Tame the complexity with a `Page` class that handles access to component properties and encapsulates the logic that sets them. Here is such a `Page` class for the `hero-detail.component.spec.ts` ``` class Page { // getter properties wait to query the DOM until called. get buttons() { return this.queryAll<HTMLButtonElement>('button'); } get saveBtn() { return this.buttons[0]; } get cancelBtn() { return this.buttons[1]; } get nameDisplay() { return this.query<HTMLElement>('span'); } get nameInput() { return this.query<HTMLInputElement>('input'); } gotoListSpy: jasmine.Spy; navigateSpy: jasmine.Spy; constructor(someFixture: ComponentFixture<HeroDetailComponent>) { // get the navigate spy from the injected router spy object const routerSpy = someFixture.debugElement.injector.get(Router) as any; this.navigateSpy = routerSpy.navigate; // spy on component's `gotoList()` method const someComponent = someFixture.componentInstance; this.gotoListSpy = spyOn(someComponent, 'gotoList').and.callThrough(); } //// query helpers //// private query<T>(selector: string): T { return fixture.nativeElement.querySelector(selector); } private queryAll<T>(selector: string): T[] { return fixture.nativeElement.querySelectorAll(selector); } } ``` Now the important hooks for component manipulation and inspection are neatly organized and accessible from an instance of `Page`. A `[createComponent](../api/core/createcomponent)` method creates a `page` object and fills in the blanks once the `hero` arrives. ``` /** Create the HeroDetailComponent, initialize it, set test variables */ function createComponent() { fixture = TestBed.createComponent(HeroDetailComponent); component = fixture.componentInstance; page = new Page(fixture); // 1st change detection triggers ngOnInit which gets a hero fixture.detectChanges(); return fixture.whenStable().then(() => { // 2nd change detection displays the async-fetched hero fixture.detectChanges(); }); } ``` The [`HeroDetailComponent` tests](testing-components-scenarios#tests-w-test-double) in an earlier section demonstrate how `[createComponent](../api/core/createcomponent)` and `page` keep the tests short and *on message*. There are no distractions: no waiting for promises to resolve and no searching the DOM for element values to compare. Here are a few more `HeroDetailComponent` tests to reinforce the point. 
``` it("should display that hero's name", () => { expect(page.nameDisplay.textContent).toBe(expectedHero.name); }); it('should navigate when click cancel', () => { click(page.cancelBtn); expect(page.navigateSpy.calls.any()) .withContext('router.navigate called') .toBe(true); }); it('should save when click save but not navigate immediately', () => { // Get service injected into component and spy on its`saveHero` method. // It delegates to fake `HeroService.updateHero` which delivers a safe test result. const hds = fixture.debugElement.injector.get(HeroDetailService); const saveSpy = spyOn(hds, 'saveHero').and.callThrough(); click(page.saveBtn); expect(saveSpy.calls.any()) .withContext('HeroDetailService.save called') .toBe(true); expect(page.navigateSpy.calls.any()) .withContext('router.navigate not called') .toBe(false); }); it('should navigate when click save and save resolves', fakeAsync(() => { click(page.saveBtn); tick(); // wait for async save to complete expect(page.navigateSpy.calls.any()) .withContext('router.navigate called') .toBe(true); })); it('should convert hero name to Title Case', () => { // get the name's input and display elements from the DOM const hostElement: HTMLElement = fixture.nativeElement; const nameInput: HTMLInputElement = hostElement.querySelector('input')!; const nameDisplay: HTMLElement = hostElement.querySelector('span')!; // simulate user entering a new name into the input box nameInput.value = 'quick BROWN fOx'; // Dispatch a DOM event so that Angular learns of input value change. nameInput.dispatchEvent(new Event('input')); // Tell Angular to update the display binding through the title pipe fixture.detectChanges(); expect(nameDisplay.textContent).toBe('Quick Brown Fox'); }); ``` Calling `compileComponents()` ----------------------------- > Ignore this section if you *only* run tests with the CLI `ng test` command because the CLI compiles the application before running the tests. > > If you run tests in a **non-CLI environment**, the tests might fail with a message like this one: ``` Error: This test module uses the component BannerComponent which is using a "templateUrl" or "styleUrls", but they were never compiled. Please call "TestBed.compileComponents" before your test. ``` The root of the problem is at least one of the components involved in the test specifies an external template or CSS file as the following version of the `BannerComponent` does. ``` import { Component } from '@angular/core'; @Component({ selector: 'app-banner', templateUrl: './banner-external.component.html', styleUrls: ['./banner-external.component.css'] }) export class BannerComponent { title = 'Test Tour of Heroes'; } ``` The test fails when the `[TestBed](../api/core/testing/testbed)` tries to create the component. ``` beforeEach(async () => { await TestBed.configureTestingModule({ declarations: [ BannerComponent ], }); // missing call to compileComponents() fixture = TestBed.createComponent(BannerComponent); }); ``` Recall that the application hasn't been compiled. So when you call `[createComponent](../api/core/createcomponent)()`, the `[TestBed](../api/core/testing/testbed)` compiles implicitly. That's not a problem when the source code is in memory. But the `BannerComponent` requires external files that the compiler must read from the file system, an inherently *asynchronous* operation. If the `[TestBed](../api/core/testing/testbed)` were allowed to continue, the tests would run and fail mysteriously before the compiler could finish. 
The preemptive error message tells you to compile explicitly with `compileComponents()`. #### `compileComponents()` is async You must call `compileComponents()` within an asynchronous test function. > If you neglect to make the test function async (for example, forget to use `[waitForAsync](../api/core/testing/waitforasync)()` as described), you'll see this error message > > > ``` > Error: ViewDestroyedError: Attempt to use a destroyed view > ``` > A typical approach is to divide the setup logic into two separate `beforeEach()` functions: | Functions | Details | | --- | --- | | Asynchronous `beforeEach()` | Compiles the components | | Synchronous `beforeEach()` | Performs the remaining setup | #### The async `beforeEach` Write the first async `beforeEach` like this. ``` beforeEach(async () => { await TestBed.configureTestingModule({ declarations: [ BannerComponent ], }).compileComponents(); // compile template and css }); ``` The `TestBed.configureTestingModule()` method returns the `[TestBed](../api/core/testing/testbed)` class so you can chain calls to other `[TestBed](../api/core/testing/testbed)` static methods such as `compileComponents()`. In this example, the `BannerComponent` is the only component to compile. Other examples configure the testing module with multiple components and might import application modules that hold yet more components. Any of them could require external files. The `TestBed.compileComponents` method asynchronously compiles all components configured in the testing module. > Do not re-configure the `[TestBed](../api/core/testing/testbed)` after calling `compileComponents()`. > > Calling `compileComponents()` closes the current `[TestBed](../api/core/testing/testbed)` instance to further configuration. You cannot call any more `[TestBed](../api/core/testing/testbed)` configuration methods, not `configureTestingModule()` nor any of the `override...` methods. The `[TestBed](../api/core/testing/testbed)` throws an error if you try. Make `compileComponents()` the last step before calling `TestBed.createComponent()`. #### The synchronous `beforeEach` The second, synchronous `beforeEach()` contains the remaining setup steps, which include creating the component and querying for elements to inspect. ``` beforeEach(() => { fixture = TestBed.createComponent(BannerComponent); component = fixture.componentInstance; // BannerComponent test instance h1 = fixture.nativeElement.querySelector('h1'); }); ``` Count on the test runner to wait for the first asynchronous `beforeEach` to finish before calling the second. #### Consolidated setup You can consolidate the two `beforeEach()` functions into a single, async `beforeEach()`. The `compileComponents()` method returns a promise so you can perform the synchronous setup tasks *after* compilation by moving the synchronous code after the `await` keyword, where the promise has been resolved. ``` beforeEach(async () => { await TestBed.configureTestingModule({ declarations: [ BannerComponent ], }).compileComponents(); fixture = TestBed.createComponent(BannerComponent); component = fixture.componentInstance; h1 = fixture.nativeElement.querySelector('h1'); }); ``` #### `compileComponents()` is harmless There's no harm in calling `compileComponents()` when it's not required. The component test file generated by the CLI calls `compileComponents()` even though it is never required when running `ng test`. The tests in this guide only call `compileComponents` when necessary. 
Setup with module imports
-------------------------

Earlier component tests configured the testing module with a few `declarations` like this:

```
TestBed
    .configureTestingModule({declarations: [DashboardHeroComponent]})
```

The `DashboardHeroComponent` is simple. It needs no help. But more complex components often depend on other components, directives, pipes, and providers, and these must be added to the testing module too.

Fortunately, the `TestBed.configureTestingModule` parameter parallels the metadata passed to the `@[NgModule](../api/core/ngmodule)` decorator, which means you can also specify `providers` and `imports`.

The `HeroDetailComponent` requires a lot of help despite its small size and simple construction. In addition to the support it receives from the default testing module `[CommonModule](../api/common/commonmodule)`, it needs:

* `[NgModel](../api/forms/ngmodel)` and friends in the `[FormsModule](../api/forms/formsmodule)` to enable two-way data binding
* The `[TitleCasePipe](../api/common/titlecasepipe)` from the `shared` folder
* The Router services that these tests are stubbing out
* The Hero data access services that are also stubbed out

One approach is to configure the testing module from the individual pieces as in this example:

```
beforeEach(async () => {
  const routerSpy = createRouterSpy();

  await TestBed
      .configureTestingModule({
        imports: [FormsModule],
        declarations: [HeroDetailComponent, TitleCasePipe],
        providers: [
          {provide: ActivatedRoute, useValue: activatedRoute},
          {provide: HeroService, useClass: TestHeroService},
          {provide: Router, useValue: routerSpy},
        ]
      })
      .compileComponents();
});
```

> Notice that the `beforeEach()` is asynchronous and calls `TestBed.compileComponents` because the `HeroDetailComponent` has an external template and css file.
> 
> As explained in [Calling `compileComponents()`](testing-components-scenarios#compile-components), these tests could be run in a non-CLI environment where Angular would have to compile them in the browser.
> 
> 

#### Import a shared module

Because many application components need the `[FormsModule](../api/forms/formsmodule)` and the `[TitleCasePipe](../api/common/titlecasepipe)`, the developer created a `SharedModule` to combine these and other frequently requested parts.

The test configuration can use the `SharedModule` too, as seen in this alternative setup:

```
beforeEach(async () => {
  const routerSpy = createRouterSpy();

  await TestBed
      .configureTestingModule({
        imports: [SharedModule],
        declarations: [HeroDetailComponent],
        providers: [
          {provide: ActivatedRoute, useValue: activatedRoute},
          {provide: HeroService, useClass: TestHeroService},
          {provide: Router, useValue: routerSpy},
        ]
      })
      .compileComponents();
});
```

It's a bit tighter and smaller, with fewer import statements, which are not shown in this example.

#### Import a feature module

The `HeroDetailComponent` is part of the `HeroModule` [Feature Module](feature-modules) that aggregates more of the interdependent pieces including the `SharedModule`. Try a test configuration that imports the `HeroModule` like this one:

```
beforeEach(async () => {
  const routerSpy = createRouterSpy();

  await TestBed
      .configureTestingModule({
        imports: [HeroModule],
        providers: [
          {provide: ActivatedRoute, useValue: activatedRoute},
          {provide: HeroService, useClass: TestHeroService},
          {provide: Router, useValue: routerSpy},
        ]
      })
      .compileComponents();
});
```

That's *really* crisp. Only the *test doubles* in the `providers` remain. Even the `HeroDetailComponent` declaration is gone.
In fact, if you try to declare it, Angular will throw an error because `HeroDetailComponent` is declared in both the `HeroModule` and the `DynamicTestModule` created by the `[TestBed](../api/core/testing/testbed)`. > Importing the component's feature module can be the best way to configure tests when there are many mutual dependencies within the module and the module is small, as feature modules tend to be. > > Override component providers ---------------------------- The `HeroDetailComponent` provides its own `HeroDetailService`. ``` @Component({ selector: 'app-hero-detail', templateUrl: './hero-detail.component.html', styleUrls: ['./hero-detail.component.css' ], providers: [ HeroDetailService ] }) export class HeroDetailComponent implements OnInit { constructor( private heroDetailService: HeroDetailService, private route: ActivatedRoute, private router: Router) { } } ``` It's not possible to stub the component's `HeroDetailService` in the `providers` of the `TestBed.configureTestingModule`. Those are providers for the *testing module*, not the component. They prepare the dependency injector at the *fixture level*. Angular creates the component with its *own* injector, which is a *child* of the fixture injector. It registers the component's providers (the `HeroDetailService` in this case) with the child injector. A test cannot get to child injector services from the fixture injector. And `TestBed.configureTestingModule` can't configure them either. Angular has created new instances of the real `HeroDetailService` all along! > These tests could fail or timeout if the `HeroDetailService` made its own XHR calls to a remote server. There might not be a remote server to call. > > Fortunately, the `HeroDetailService` delegates responsibility for remote data access to an injected `HeroService`. > > > ``` > @Injectable() > export class HeroDetailService { > constructor(private heroService: HeroService) { } > /* . . . */ > } > ``` > The [previous test configuration](testing-components-scenarios#feature-module-import) replaces the real `HeroService` with a `TestHeroService` that intercepts server requests and fakes their responses. > > What if you aren't so lucky. What if faking the `HeroService` is hard? What if `HeroDetailService` makes its own server requests? The `TestBed.overrideComponent` method can replace the component's `providers` with easy-to-manage *test doubles* as seen in the following setup variation: ``` beforeEach(async () => { const routerSpy = createRouterSpy(); await TestBed .configureTestingModule({ imports: [HeroModule], providers: [ {provide: ActivatedRoute, useValue: activatedRoute}, {provide: Router, useValue: routerSpy}, ] }) // Override component's own provider .overrideComponent( HeroDetailComponent, {set: {providers: [{provide: HeroDetailService, useClass: HeroDetailServiceSpy}]}}) .compileComponents(); }); ``` Notice that `TestBed.configureTestingModule` no longer provides a fake `HeroService` because it's [not needed](testing-components-scenarios#spy-stub). #### The `overrideComponent` method Focus on the `overrideComponent` method. ``` .overrideComponent( HeroDetailComponent, {set: {providers: [{provide: HeroDetailService, useClass: HeroDetailServiceSpy}]}}) ``` It takes two arguments: the component type to override (`HeroDetailComponent`) and an override metadata object. 
The [override metadata object](testing-utility-apis#metadata-override-object) is a generic defined as follows: ``` type MetadataOverride<T> = { add?: Partial<T>; remove?: Partial<T>; set?: Partial<T>; }; ``` A metadata override object can either add-and-remove elements in metadata properties or completely reset those properties. This example resets the component's `providers` metadata. The type parameter, `T`, is the kind of metadata you'd pass to the `@[Component](../api/core/component)` decorator: ``` selector?: string; template?: string; templateUrl?: string; providers?: any[]; … ``` #### Provide a *spy stub* (`HeroDetailServiceSpy`) This example completely replaces the component's `providers` array with a new array containing a `HeroDetailServiceSpy`. The `HeroDetailServiceSpy` is a stubbed version of the real `HeroDetailService` that fakes all necessary features of that service. It neither injects nor delegates to the lower level `HeroService` so there's no need to provide a test double for that. The related `HeroDetailComponent` tests will assert that methods of the `HeroDetailService` were called by spying on the service methods. Accordingly, the stub implements its methods as spies: ``` class HeroDetailServiceSpy { testHero: Hero = {id: 42, name: 'Test Hero'}; /* emit cloned test hero */ getHero = jasmine.createSpy('getHero').and.callFake( () => asyncData(Object.assign({}, this.testHero))); /* emit clone of test hero, with changes merged in */ saveHero = jasmine.createSpy('saveHero') .and.callFake((hero: Hero) => asyncData(Object.assign(this.testHero, hero))); } ``` #### The override tests Now the tests can control the component's hero directly by manipulating the spy-stub's `testHero` and confirm that service methods were called. ``` let hdsSpy: HeroDetailServiceSpy; beforeEach(async () => { await createComponent(); // get the component's injected HeroDetailServiceSpy hdsSpy = fixture.debugElement.injector.get(HeroDetailService) as any; }); it('should have called `getHero`', () => { expect(hdsSpy.getHero.calls.count()) .withContext('getHero called once') .toBe(1, 'getHero called once'); }); it("should display stub hero's name", () => { expect(page.nameDisplay.textContent).toBe(hdsSpy.testHero.name); }); it('should save stub hero change', fakeAsync(() => { const origName = hdsSpy.testHero.name; const newName = 'New Name'; page.nameInput.value = newName; page.nameInput.dispatchEvent(new Event('input')); // tell Angular expect(component.hero.name) .withContext('component hero has new name') .toBe(newName); expect(hdsSpy.testHero.name) .withContext('service hero unchanged before save') .toBe(origName); click(page.saveBtn); expect(hdsSpy.saveHero.calls.count()) .withContext('saveHero called once') .toBe(1); tick(); // wait for async save to complete expect(hdsSpy.testHero.name) .withContext('service hero has new name after save') .toBe(newName); expect(page.navigateSpy.calls.any()) .withContext('router.navigate called') .toBe(true); })); ``` #### More overrides The `TestBed.overrideComponent` method can be called multiple times for the same or different components. The `[TestBed](../api/core/testing/testbed)` offers similar `overrideDirective`, `overrideModule`, and `overridePipe` methods for digging into and replacing parts of these other classes. Explore the options and combinations on your own. Last reviewed on Mon Feb 28 2022
angular Upgrading for performance Upgrading for performance ========================= > *Angular* is the name for the Angular of today and tomorrow. > > *AngularJS* is the name for all 1.x versions of Angular. > > This guide describes some of the built-in tools for efficiently migrating AngularJS projects over to the Angular platform, one piece at a time. It is very similar to [Upgrading from AngularJS](upgrade) with the exception that this one uses the [downgradeModule()](../api/upgrade/static/downgrademodule) helper function instead of the [UpgradeModule](../api/upgrade/static/upgrademodule) class. This affects how the application is bootstrapped and how change detection is propagated between the two frameworks. It allows you to upgrade incrementally while improving the speed of your hybrid applications and leveraging the latest of Angular in AngularJS applications early in the process of upgrading. Preparation ----------- Before discussing how you can use `[downgradeModule](../api/upgrade/static/downgrademodule)()` to create hybrid apps, there are things that you can do to ease the upgrade process even before you begin upgrading. Because the steps are the same regardless of how you upgrade, refer to the [Preparation](upgrade#preparation) section of [Upgrading from AngularJS](upgrade). Upgrading with `ngUpgrade` -------------------------- With the `ngUpgrade` library in Angular you can upgrade an existing AngularJS application incrementally by building a hybrid app where you can run both frameworks side-by-side. In these hybrid applications you can mix and match AngularJS and Angular components and services and have them interoperate seamlessly. That means you don't have to do the upgrade work all at once as there is a natural coexistence between the two frameworks during the transition period. ### How `ngUpgrade` Works Regardless of whether you choose `[downgradeModule](../api/upgrade/static/downgrademodule)()` or `[UpgradeModule](../api/upgrade/static/upgrademodule)`, the basic principles of upgrading, the mental model behind hybrid apps, and how you use the [upgrade/static](../api/upgrade/static) utilities remain the same. For more information, see the [How `ngUpgrade` Works](upgrade#how-ngupgrade-works) section of [Upgrading from AngularJS](upgrade). > The [Change Detection](upgrade#change-detection) section of [Upgrading from AngularJS](upgrade) only applies to applications that use `[UpgradeModule](../api/upgrade/static/upgrademodule)`. Though you handle change detection differently with `[downgradeModule](../api/upgrade/static/downgrademodule)()`, which is the focus of this guide, reading the [Change Detection](upgrade#change-detection) section provides helpful context for what follows. > > #### Change Detection with `[downgradeModule](../api/upgrade/static/downgrademodule)()` As mentioned before, one of the key differences between `[downgradeModule](../api/upgrade/static/downgrademodule)()` and `[UpgradeModule](../api/upgrade/static/upgrademodule)` has to do with change detection and how it is propagated between the two frameworks. With `[UpgradeModule](../api/upgrade/static/upgrademodule)`, the two change detection systems are tied together more tightly. Whenever something happens in the AngularJS part of the app, change detection is automatically triggered on the Angular part and vice versa. This is convenient as it ensures that neither framework misses an important change. Most of the time, though, these extra change detection runs are unnecessary. 
`[downgradeModule](../api/upgrade/static/downgrademodule)()`, on the other side, avoids explicitly triggering change detection unless it knows the other part of the application is interested in the changes. For example, if a downgraded component defines an `@[Input](../api/core/input)()`, chances are that the application needs to be aware when that value changes. Thus, `[downgradeComponent](../api/upgrade/static/downgradecomponent)()` automatically triggers change detection on that component. In most cases, though, the changes made locally in a particular component are of no interest to the rest of the application. For example, if the user clicks a button that submits a form, the component usually handles the result of this action. That being said, there *are* cases where you want to propagate changes to some other part of the application that may be controlled by the other framework. In such cases, you are responsible for notifying the interested parties by manually triggering change detection. If you want a particular piece of code to trigger change detection in the AngularJS part of the app, you need to wrap it in [scope.$apply()](https://docs.angularjs.org/api/ng/type/%24rootScope.Scope#%24apply). Similarly, for triggering change detection in Angular you would use [ngZone.run()](../api/core/ngzone#run). In many cases, a few extra change detection runs may not matter much. However, on larger or change-detection-heavy applications they can have a noticeable impact. By giving you more fine-grained control over the change detection propagation, `[downgradeModule](../api/upgrade/static/downgrademodule)()` allows you to achieve better performance for your hybrid applications. Using `[downgradeModule](../api/upgrade/static/downgrademodule)()` ------------------------------------------------------------------ Both AngularJS and Angular have their own concept of modules to help organize an application into cohesive blocks of functionality. Their details are quite different in architecture and implementation. In AngularJS, you create a module by specifying its name and dependencies with [angular.module()](https://docs.angularjs.org/api/ng/function/angular.module). Then you can add assets using its various methods. In Angular, you create a class adorned with an [NgModule](../api/core/ngmodule) decorator that describes assets in metadata. In a hybrid application you run both frameworks at the same time. This means that you need at least one module each from both AngularJS and Angular. For the most part, you specify the modules in the same way you would for a regular application. Then, you use the `[upgrade/static](../api/upgrade/static)` helpers to let the two frameworks know about assets they can use from each other. This is known as "upgrading" and "downgrading". * *Upgrading*: The act of making an AngularJS asset, such as a component or service, available to the Angular part of the application. * *Downgrading*: The act of making an Angular asset, such as a component or service, available to the AngularJS part of the application. An important part of inter-linking dependencies is linking the two main modules together. This is where `[downgradeModule](../api/upgrade/static/downgrademodule)()` comes in. Use it to create an AngularJS module —one that you can use as a dependency in your main AngularJS module— that will bootstrap your main Angular module and kick off the Angular part of the hybrid application. In a sense, it "downgrades" an Angular module to an AngularJS module. 
There are a few things to remember, though: * You don't pass the Angular module directly to `[downgradeModule](../api/upgrade/static/downgrademodule)()`. All `[downgradeModule](../api/upgrade/static/downgrademodule)()` needs is a "recipe", for example, a factory function, to create an instance of your module. * The Angular module is not instantiated until the application actually needs it. The following is an example of how you can use `[downgradeModule](../api/upgrade/static/downgrademodule)()` to link the two modules. ``` // Import `downgradeModule()`. import { downgradeModule } from '@angular/upgrade/static'; // Use it to downgrade the Angular module to an AngularJS module. const downgradedModule = downgradeModule(MainAngularModuleFactory); // Use the downgraded module as a dependency of the main AngularJS module. angular.module('mainAngularJsModule', [ downgradedModule ]); ``` #### Specifying a factory for the Angular module As mentioned earlier, `[downgradeModule](../api/upgrade/static/downgrademodule)()` needs to know how to instantiate the Angular module. It needs a recipe. You define that recipe by providing a factory function that can create an instance of the Angular module. `[downgradeModule](../api/upgrade/static/downgrademodule)()` accepts two types of factory functions: * `[NgModuleFactory](../api/core/ngmodulefactory)` * `(extraProviders: [StaticProvider](../api/core/staticprovider)[]) => Promise<[NgModuleRef](../api/core/ngmoduleref)>` When you pass an `[NgModuleFactory](../api/core/ngmodulefactory)`, `[downgradeModule](../api/upgrade/static/downgrademodule)()` uses it to instantiate the module using [platformBrowser](../api/platform-browser/platformbrowser)'s [bootstrapModuleFactory()](../api/core/platformref#bootstrapModuleFactory), which is compatible with ahead-of-time (AOT) compilation. AOT compilation helps make your applications load faster. For more about AOT and how to create an `[NgModuleFactory](../api/core/ngmodulefactory)`, see the [Ahead-of-Time Compilation](aot-compiler) guide. Alternatively, you can pass a plain function, which is expected to return a promise resolving to an [NgModuleRef](../api/core/ngmoduleref) (that is, an instance of your Angular module). The function is called with an array of extra [Providers](../api/core/staticprovider) that are expected to be available on the returned `[NgModuleRef](../api/core/ngmoduleref)`'s [Injector](../api/core/injector). For example, if you are using [platformBrowser](../api/platform-browser/platformbrowser) or [platformBrowserDynamic](../api/platform-browser-dynamic/platformbrowserdynamic), you can pass the `extraProviders` array to them: ``` const bootstrapFn = (extraProviders: StaticProvider[]) => { const platformRef = platformBrowserDynamic(extraProviders); return platformRef.bootstrapModule(MainAngularModule); }; // or const bootstrapFn = (extraProviders: StaticProvider[]) => { const platformRef = platformBrowser(extraProviders); return platformRef.bootstrapModuleFactory(MainAngularModuleFactory); }; ``` Using an `[NgModuleFactory](../api/core/ngmodulefactory)` requires less boilerplate and is a good default option as it supports AOT out-of-the-box. Using a custom function requires slightly more code, but gives you greater flexibility. 
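One place where that extra flexibility can pay off is deferring the download of the Angular bundle itself. The following is only a sketch: it assumes the Angular module from this guide lives in `./app.module`, that your build setup supports dynamic `import()`, and the name `lazyBootstrapFn` is purely illustrative.

```
import { StaticProvider } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { downgradeModule } from '@angular/upgrade/static';

// Defer both downloading and instantiating the Angular part until
// `downgradeModule()` first needs it, for example to create a downgraded component.
// (Assumes `./app.module` exports `MainAngularModule`, as set up later in this guide.)
const lazyBootstrapFn = (extraProviders: StaticProvider[]) =>
  import('./app.module').then(({ MainAngularModule }) =>
    platformBrowserDynamic(extraProviders).bootstrapModule(MainAngularModule));

const downgradedModule = downgradeModule(lazyBootstrapFn);

angular.module('mainAngularJsModule', [
  downgradedModule,
]);
```

Because `downgradeModule()` only calls the factory function when the Angular part is first required, the dynamic `import()` keeps the Angular code out of the initial download entirely. The next section describes this on-demand behavior in more detail.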
#### Instantiating the Angular module on-demand Another key difference between `[downgradeModule](../api/upgrade/static/downgrademodule)()` and `[UpgradeModule](../api/upgrade/static/upgrademodule)` is that the latter requires you to instantiate both the AngularJS and Angular modules up-front. This means that you have to pay the cost of instantiating the Angular part of the app, even if you don't use any Angular assets until later. `[downgradeModule](../api/upgrade/static/downgrademodule)()` is again less aggressive. It will only instantiate the Angular part when it is required for the first time; that is, as soon as it needs to create a downgraded component. You could go a step further and not even download the code for the Angular part of the application to the user's browser until it is needed. This is especially useful when you use Angular on parts of the hybrid application that are not necessary for the initial rendering or that the user doesn't reach. A few examples are: * You use Angular on specific routes only and you don't need it until/if a user visits such a route. * You use Angular for features that are only visible to specific types of users; for example, logged-in users, administrators, or VIP members. You don't need to load Angular until a user is authenticated. * You use Angular for a feature that is not critical for the initial rendering of the application and you can afford a small delay in favor of better initial load performance. ### Bootstrapping with `[downgradeModule](../api/upgrade/static/downgrademodule)()` As you might have guessed, you don't need to change anything in the way you bootstrap your existing AngularJS application. Unlike `[UpgradeModule](../api/upgrade/static/upgrademodule)`—which requires some extra steps— `[downgradeModule](../api/upgrade/static/downgrademodule)()` is able to take care of bootstrapping the Angular module, as long as you provide the recipe. In order to start using any `[upgrade/static](../api/upgrade/static)` APIs, you still need to load the Angular framework as you would in a normal Angular application. You can see how this can be done with SystemJS by following the instructions in the [Upgrade Setup](upgrade-setup "Setup for Upgrading from AngularJS") guide, selectively copying code from the [QuickStart GitHub repository](https://github.com/angular/quickstart). You also need to install the `@angular/upgrade` package using `npm install @angular/upgrade --save` and add a mapping for the `@angular/upgrade/[static](../api/upgrade/static)` package: ``` '@angular/upgrade/static': 'npm:@angular/upgrade/fesm2015/static.mjs', ``` Next, create an `app.module.ts` file and add the following `[NgModule](../api/core/ngmodule)` class: ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; @NgModule({ imports: [ BrowserModule ] }) export class MainAngularModule { // Empty placeholder method to satisfy the `Compiler`. ngDoBootstrap() {} } ``` This bare minimum `[NgModule](../api/core/ngmodule)` imports `[BrowserModule](../api/platform-browser/browsermodule)`, the module every Angular browser-based app must have. It also defines an empty `ngDoBootstrap()` method, to prevent the [Compiler](../api/core/compiler) from returning errors. This is necessary because the module will not have a `bootstrap` declaration on its `[NgModule](../api/core/ngmodule)` decorator. 
> You do not add a `bootstrap` declaration to the `[NgModule](../api/core/ngmodule)` decorator since AngularJS owns the root template of the application and `ngUpgrade` bootstraps the necessary components. > > You can now link the AngularJS and Angular modules together using `[downgradeModule](../api/upgrade/static/downgrademodule)()`. ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { downgradeModule } from '@angular/upgrade/static'; const bootstrapFn = (extraProviders: StaticProvider[]) => { const platformRef = platformBrowserDynamic(extraProviders); return platformRef.bootstrapModule(MainAngularModule); }; const downgradedModule = downgradeModule(bootstrapFn); angular.module('mainAngularJsModule', [ downgradedModule ]); ``` The existing AngularJS code works as before *and* you are ready to start adding Angular code. ### Using Components and Injectables The differences between `[downgradeModule](../api/upgrade/static/downgrademodule)()` and `[UpgradeModule](../api/upgrade/static/upgrademodule)` end here. The rest of the `[upgrade/static](../api/upgrade/static)` APIs and concepts work in the exact same way for both types of hybrid applications. See [Upgrading from AngularJS](upgrade) to learn about: * [Using Angular Components from AngularJS Code](upgrade#using-angular-components-from-angularjs-code). **NOTE**: If you are downgrading multiple modules, you need to specify the name of the downgraded module each component belongs to, when calling `[downgradeComponent](../api/upgrade/static/downgradecomponent)()`. * [Using AngularJS Component Directives from Angular Code](upgrade#using-angularjs-component-directives-from-angular-code). * [Projecting AngularJS Content into Angular Components](upgrade#projecting-angularjs-content-into-angular-components). * [Transcluding Angular Content into AngularJS Component Directives](upgrade#transcluding-angular-content-into-angularjs-component-directives). * [Making AngularJS Dependencies Injectable to Angular](upgrade#making-angularjs-dependencies-injectable-to-angular). * [Making Angular Dependencies Injectable to AngularJS](upgrade#making-angular-dependencies-injectable-to-angularjs). **NOTE**: If you are downgrading multiple modules, you need to specify the name of the downgraded module each injectable belongs to, when calling `[downgradeInjectable](../api/upgrade/static/downgradeinjectable)()`. > While it is possible to downgrade injectables, downgraded injectables will not be available until the Angular module that provides them is instantiated. In order to be safe, you need to ensure that the downgraded injectables are not used anywhere *outside* the part of the application where it is guaranteed that their module has been instantiated. > > For example, it is *OK* to use a downgraded service in an upgraded component that is only used from a downgraded Angular component provided by the same Angular module as the injectable, but it is *not OK* to use it in an AngularJS component that may be used independently of Angular or use it in a downgraded Angular component from a different module. > > Using ahead-of-time compilation with hybrid apps ------------------------------------------------ You can take advantage of ahead-of-time (AOT) compilation in hybrid applications just like in any other Angular application. The setup for a hybrid application is mostly the same as described in the [Ahead-of-Time Compilation](aot-compiler) guide save for differences in `index.html` and `main-aot.ts`. 
AOT needs to load any AngularJS files that are in the `<script>` tags in the AngularJS `index.html`. An easy way to copy them is to add each to the `copy-dist-files.js` file. You also need to pass the generated `MainAngularModuleFactory` to `[downgradeModule](../api/upgrade/static/downgrademodule)()` instead of the custom bootstrap function: ``` import { downgradeModule } from '@angular/upgrade/static'; import { MainAngularModuleNgFactory } from '../aot/app/app.module.ngfactory'; const downgradedModule = downgradeModule(MainAngularModuleNgFactory); angular.module('mainAngularJsModule', [ downgradedModule ]); ``` And that is all you need to do to get the full benefit of AOT for hybrid Angular applications. Conclusion ---------- This page covered how to use the [upgrade/static](../api/upgrade/static) package to incrementally upgrade existing AngularJS applications at your own pace and without impeding further development of the app for the duration of the upgrade process. Specifically, this guide showed how you can achieve better performance and greater flexibility in your hybrid applications by using [downgradeModule()](../api/upgrade/static/downgrademodule) instead of [UpgradeModule](../api/upgrade/static/upgrademodule). To summarize, the key differentiating factors of `[downgradeModule](../api/upgrade/static/downgrademodule)()` are: 1. It allows instantiating or even loading the Angular part lazily, which improves the initial loading time. In some cases this may waive the cost of running a second framework altogether. 2. It improves performance by avoiding unnecessary change detection runs while giving the developer greater ability to customize. 3. It does not require you to change how you bootstrap your AngularJS application. Using `[downgradeModule](../api/upgrade/static/downgrademodule)()` is a good option for hybrid applications when you want to keep the AngularJS and Angular parts less coupled. You can still mix and match components and services from both frameworks, but you might need to manually propagate change detection. In return, `[downgradeModule](../api/upgrade/static/downgrademodule)()` offers more control and better performance. Last reviewed on Mon Feb 28 2022 angular Angular workspace configuration Angular workspace configuration =============================== The `angular.json` file at the root level of an Angular [workspace](glossary#workspace) provides workspace-wide and project-specific configuration defaults. These are used for build and development tools provided by the Angular CLI. Path values given in the configuration are relative to the root workspace directory. General JSON structure ---------------------- At the top-level of `angular.json`, a few properties configure the workspace and a `projects` section contains the remaining per-project configuration options. You can override Angular CLI defaults set at the workspace level through defaults set at the project level. You can also override defaults set at the project level using the command line. The following properties, at the top-level of the file, configure the workspace. | Properties | Details | | --- | --- | | `version` | The configuration-file version. | | `newProjectRoot` | Path where new projects are created. Absolute or relative to the workspace directory. | | `cli` | A set of options that customize the [Angular CLI](cli). See the [Angular CLI configuration options](workspace-config#cli-configuration-options) section. 
| | `schematics` | A set of [schematics](glossary#schematic) that customize the `ng generate` sub-command option defaults for this workspace. See the [Generation schematics](workspace-config#schematics) section. | | `projects` | Contains a subsection for each library or application in the workspace, with the per-project configuration options. | The initial application that you create with `ng new app_name` is listed under "projects": ``` "projects": { "app_name": { … } … } ``` When you create a library project with `ng generate library`, the library project is also added to the `projects` section. > **NOTE**: The `projects` section of the configuration file does not correspond exactly to the workspace file structure. > > * The initial application created by `ng new` is at the top level of the workspace file structure > * Other applications and libraries go into a `projects` directory in the workspace > > For more information, see [Workspace and project file structure](file-structure). > > Angular CLI configuration options --------------------------------- The following configuration properties are a set of options that customize the Angular CLI. | Property | Details | Value type | | --- | --- | --- | | `analytics` | Share anonymous [usage data](cli/analytics) with the Angular Team. | `boolean` | `ci` | | `cache` | Control [persistent disk cache](cli/cache) used by [Angular CLI Builders](cli-builder). | [Cache options](workspace-config#cache-options) | | `schematicCollections` | A list of default schematics collections to use. | `string[]` | | `packageManager` | The preferred package manager tool to use. | `npm` | `cnpm` | `pnpm` | `yarn` | | `warnings` | Control Angular CLI specific console warnings. | [Warnings options](workspace-config#warnings-options) | ### Cache options | Property | Details | Value type | Default value | | --- | --- | --- | --- | | `enabled` | Configure whether disk caching is enabled. | `boolean` | `true` | | `environment` | Configure in which environment disk cache is enabled. | `local` | `ci` | `all` | `local` | | `path` | The directory used to store cache results. | `string` | `.angular/cache` | ### Warnings options | Property | Details | Value type | Default value | | --- | --- | --- | --- | | `versionMismatch` | Show a warning when the global Angular CLI version is newer than the local one. | `boolean` | `true` | Project configuration options ----------------------------- The following top-level configuration properties are available for each project, under `projects:<project_name>`. ``` "my-app": { "root": "", "sourceRoot": "src", "projectType": "application", "prefix": "app", "schematics": {}, "architect": {} } ``` | Property | Details | | --- | --- | | `root` | The root directory for this project's files, relative to the workspace directory. Empty for the initial application, which resides at the top level of the workspace. | | `sourceRoot` | The root directory for this project's source files. | | `projectType` | One of "application" or "library". An application can run independently in a browser, while a library cannot. | | `prefix` | A string that Angular prepends to created selectors. Can be customized to identify an application or feature area. | | `schematics` | A set of schematics that customize the `ng generate` sub-command option defaults for this project. See the [Generation schematics](workspace-config#schematics) section. | | `architect` | Configuration defaults for Architect builder targets for this project. 
| Generation schematics --------------------- Angular generation [schematics](glossary#schematic) are instructions for modifying a project by adding files or modifying existing files. Individual schematics for the default Angular CLI `ng generate` sub-commands are collected in the package `@schematics/angular`. Specify the schematic name for a subcommand in the format `schematic-package:schematic-name`; for example, the schematic for generating a component is `@schematics/angular:component`. The JSON schemas for the default schematics used by the Angular CLI to create projects and parts of projects are collected in the package [`@schematics/angular`](https://github.com/angular/angular-cli/blob/main/packages/schematics/angular/application/schema.json). The schema describes the options available to the Angular CLI for each of the `ng generate` sub-commands, as shown in the `--help` output. The fields given in the schema correspond to the allowed argument values and defaults for the Angular CLI sub-command options. You can update your workspace schema file to set a different default for a sub-command option. Project tool configuration options ---------------------------------- Architect is the tool that the Angular CLI uses to perform complex tasks, such as compilation and test running. Architect is a shell that runs a specified [builder](glossary#builder) to perform a given task, according to a [target](glossary#target) configuration. You can define and configure new builders and targets to extend the Angular CLI. See [Angular CLI Builders](cli-builder). ### Default Architect builders and targets Angular defines default builders for use with specific commands, or with the general `ng run` command. The JSON schemas that define the options and defaults for each of these default builders are collected in the [`@angular-devkit/build-angular`](https://github.com/angular/angular-cli/blob/main/packages/angular_devkit/build_angular/builders.json) package. The schemas configure options for the following builders. * [app-shell](https://github.com/angular/angular-cli/blob/main/packages/angular_devkit/build_angular/src/builders/app-shell/schema.json) * [browser](https://github.com/angular/angular-cli/blob/main/packages/angular_devkit/build_angular/src/builders/browser/schema.json) * [dev-server](https://github.com/angular/angular-cli/blob/main/packages/angular_devkit/build_angular/src/builders/dev-server/schema.json) * [extract-i18n](https://github.com/angular/angular-cli/blob/main/packages/angular_devkit/build_angular/src/builders/extract-i18n/schema.json) * [karma](https://github.com/angular/angular-cli/blob/main/packages/angular_devkit/build_angular/src/builders/karma/schema.json) * [server](https://github.com/angular/angular-cli/blob/main/packages/angular_devkit/build_angular/src/builders/server/schema.json) ### Configuring builder targets The `architect` section of `angular.json` contains a set of Architect targets. Many of the targets correspond to the Angular CLI commands that run them. Some extra predefined targets can be run using the `ng run` command, and you can define your own targets. Each target object specifies the `builder` for that target, which is the npm package for the tool that Architect runs. Each target also has an `options` section that configures default options for the target, and a `configurations` section that names and specifies alternative configurations for the target. See the example in [Build target](workspace-config#build-target) below. 
``` "architect": { "build": {}, "serve": {}, "e2e" : {}, "test": {}, "lint": {}, "extract-i18n": {}, "server": {}, "app-shell": {} } ``` | Sections | Details | | --- | --- | | `architect/build` | Configures defaults for options of the `ng build` command. See the [Build target](workspace-config#build-target) section for more information. | | `architect/serve` | Overrides build defaults and supplies extra serve defaults for the `ng serve` command. Besides the options available for the `ng build` command, it adds options related to serving the application. | | `architect/e2e` | Overrides build-option defaults for building end-to-end testing applications using the `ng e2e` command. | | `architect/test` | Overrides build-option defaults for test builds and supplies extra test-running defaults for the `ng test` command. | | `architect/lint` | Configures defaults for options of the `ng lint` command, which performs code analysis on project source files. | | `architect/extract-i18n` | Configures defaults for options of the `ng extract-i18n` command, which extracts marked message strings from source code and outputs translation files. | | `architect/server` | Configures defaults for creating a Universal application with server-side rendering, using the `ng run <project>:server` command. | | `architect/app-shell` | Configures defaults for creating an application shell for a progressive web application (PWA), using the `ng run <project>:app-shell` command. | In general, the options for which you can configure defaults correspond to the command options listed in the [Angular CLI reference page](cli) for each command. > **NOTE**: All options in the configuration file must use [camelCase](glossary#case-conventions), rather than dash-case. > > Build target ------------ The `architect/build` section configures defaults for options of the `ng build` command. It has the following top-level properties. | PROPERTY | Details | | --- | --- | | `builder` | The npm package for the build tool used to create this target. The default builder for an application (`ng build myApp`) is `@angular-devkit/build-angular:[browser](../api/animations/browser)`, which uses the [webpack](https://webpack.js.org) package bundler. **NOTE**: A different builder is used for building a library (`ng build myLib`). | | `options` | This section contains default build target options, used when no named alternative configuration is specified. See the [Default build targets](workspace-config#default-build-targets) section. | | `configurations` | This section defines and names alternative configurations for different intended destinations. It contains a section for each named configuration, which sets the default options for that intended environment. See the [Alternate build configurations](workspace-config#build-configs) section. | ### Alternate build configurations Angular CLI comes with two build configurations: `production` and `development`. By default, the `ng build` command uses the `production` configuration, which applies several build optimizations, including: * Bundling files * Minimizing excess whitespace * Removing comments and dead code * Rewriting code to use short, mangled names, also known as minification You can define and name extra alternate configurations (such as `stage`, for instance) appropriate to your development process. 
Some examples of different build configurations are `stable`, `archive`, and `next` used by Angular.io itself, and the individual locale-specific configurations required for building localized versions of an application. For details, see [Internationalization (i18n)](i18n-common-merge "Common Internationalization task #6: Merge translations into the application | Angular"). You can select an alternate configuration by passing its name to the `--configuration` command line flag. You can also pass in more than one configuration name as a comma-separated list. For example, to apply both `stage` and `fr` build configurations, use the command `ng build --configuration stage,fr`. In this case, the command parses the named configurations from left to right. If multiple configurations change the same setting, the last-set value is the final one. In this example, if both `stage` and `fr` configurations set the output path the value in `fr` would get used. ### Extra build and test options The configurable options for a default or targeted build generally correspond to the options available for the [`ng build`](cli/build), [`ng serve`](cli/serve), and [`ng test`](cli/test) commands. For details of those options and their possible values, see the [Angular CLI Reference](cli). Some extra options can only be set through the configuration file, either by direct editing or with the [`ng config`](cli/config) command. | Options properties | Details | | --- | --- | | `assets` | An object containing paths to static assets to add to the global context of the project. The default paths point to the project's icon file and its `assets` directory. See more in the [Assets configuration](workspace-config#asset-config) section. | | `styles` | An array of style files to add to the global context of the project. Angular CLI supports CSS imports and all major CSS preprocessors: [sass/scss](https://sass-lang.com) and [less](http://lesscss.org). See more in the [Styles and scripts configuration](workspace-config#style-script-config) section. | | `stylePreprocessorOptions` | An object containing option-value pairs to pass to style preprocessors. See more in the [Styles and scripts configuration](workspace-config#style-script-config) section. | | `scripts` | An object containing JavaScript script files to add to the global context of the project. The scripts are loaded exactly as if you had added them in a `<script>` tag inside `index.html`. See more in the [Styles and scripts configuration](workspace-config#style-script-config) section. | | `budgets` | Default size-budget type and thresholds for all or parts of your application. You can configure the builder to report a warning or an error when the output reaches or exceeds a threshold size. See [Configure size budgets](build#configure-size-budgets). (Not available in `test` section.) | | `fileReplacements` | An object containing files and their compile-time replacements. See more in [Configure target-specific file replacements](build#configure-target-specific-file-replacements). | Complex configuration values ---------------------------- The `assets`, `styles`, and `scripts` options can have either simple path string values, or object values with specific fields. The `sourceMap` and `optimization` options can be set to a simple Boolean value with a command flag. They can also be given a complex value using the configuration file. The following sections provide more details of how these complex values are used in each case. 
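For instance, within a build target's `options`, the two styles can be mixed; in the sketch below, `optimization` uses the simple Boolean form while `sourceMap` uses the object form (the values themselves are only illustrative):

```
"options": {
  "optimization": true,
  "sourceMap": {
    "scripts": true,
    "styles": false,
    "vendor": false,
    "hidden": false
  }
}
```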
### Assets configuration Each `build` target configuration can include an `assets` array that lists files or folders you want to copy as-is when building your project. By default, the `src/assets/` directory and `src/favicon.ico` are copied over. ``` "assets": [ "src/assets", "src/favicon.ico" ] ``` To exclude an asset, you can remove it from the assets configuration. You can further configure assets to be copied by specifying assets as objects, rather than as simple paths relative to the workspace root. An asset specification object can have the following fields. | Fields | Details | | --- | --- | | `glob` | A [node-glob](https://github.com/isaacs/node-glob/blob/master/README.md) using `input` as base directory. | | `input` | A path relative to the workspace root. | | `output` | A path relative to `outDir` (default is `dist/project-name`). Because of the security implications, the Angular CLI never writes files outside of the project output path. | | `ignore` | A list of globs to exclude. | | `followSymlinks` | Allow glob patterns to follow symlink directories. This allows subdirectories of the symlink to be searched. Defaults to `false`. | For example, the default asset paths can be represented in more detail using the following objects. ``` "assets": [ { "glob": "**/*", "input": "src/assets/", "output": "/assets/" }, { "glob": "favicon.ico", "input": "src/", "output": "/" } ] ``` You can use this extended configuration to copy assets from outside your project. For example, the following configuration copies assets from a node package: ``` "assets": [ { "glob": "**/*", "input": "./node_modules/some-package/images", "output": "/some-package/" } ] ``` The contents of `node_modules/some-package/images/` will be available in `dist/some-package/`. The following example uses the `ignore` field to exclude certain files in the assets directory from being copied into the build: ``` "assets": [ { "glob": "**/*", "input": "src/assets/", "ignore": ["**/*.svg"], "output": "/assets/" } ] ``` ### Styles and scripts configuration An array entry for the `styles` and `scripts` options can be a simple path string, or an object that points to an extra entry-point file. The associated builder loads that file and its dependencies as a separate bundle during the build. With a configuration object, you have the option of naming the bundle for the entry point, using a `bundleName` field. The bundle is injected by default, but you can set `inject` to `false` to exclude the bundle from injection. For example, the following object values create and name a bundle that contains styles and scripts, and excludes it from injection: ``` "styles": [ { "input": "src/external-module/styles.scss", "inject": false, "bundleName": "external-module" } ], "scripts": [ { "input": "src/external-module/main.js", "inject": false, "bundleName": "external-module" } ] ``` You can mix simple and complex file references for styles and scripts. ``` "styles": [ "src/styles.css", "src/more-styles.css", { "input": "src/lazy-style.scss", "inject": false }, { "input": "src/pre-rename-style.scss", "bundleName": "renamed-style" }, ] ``` #### Style preprocessor options In Sass you can make use of the `includePaths` feature for both component and global styles. This allows you to add extra base paths that are checked for imports. 
To add paths, use the `stylePreprocessorOptions` option: ``` "stylePreprocessorOptions": { "includePaths": [ "src/style-paths" ] } ``` Files in that directory, such as `src/style-paths/_variables.scss`, can be imported from anywhere in your project without the need for a relative path: ``` // src/app/app.component.scss // A relative path works @import '../style-paths/variables'; // But now this works as well @import 'variables'; ``` > **NOTE**: You also need to add any styles or scripts to the `test` builder if you need them for unit tests. See also [Using runtime-global libraries inside your application](using-libraries#using-runtime-global-libraries-inside-your-app). > > ### Optimization configuration The `optimization` browser builder option can be either a Boolean or an Object for more fine-tune configuration. This option enables various optimizations of the build output, including: * Minification of scripts and styles * Tree-shaking * Dead-code elimination * Inlining of critical CSS * Fonts inlining Several options can be used to fine-tune the optimization of an application. | Options | Details | Value type | Default value | | --- | --- | --- | --- | | `scripts` | Enables optimization of the scripts output. | `boolean` | `true` | | `styles` | Enables optimization of the styles output. | `boolean` | [Styles optimization options](workspace-config#styles-optimization-options) | `true` | | `fonts` | Enables optimization for fonts. **NOTE**: This requires internet access. | `boolean` | [Fonts optimization options](workspace-config#fonts-optimization-options) | `true` | #### Styles optimization options | Options | Details | Value type | Default value | | --- | --- | --- | --- | | `minify` | Minify CSS definitions by removing extraneous whitespace and comments, merging identifiers, and minimizing values. | `boolean` | `true` | | `inlineCritical` | Extract and inline critical CSS definitions to improve [First Contentful Paint](https://web.dev/first-contentful-paint). | `boolean` | `true` | #### Fonts optimization options | Options | Details | Value type | Default value | | --- | --- | --- | --- | | `inline` | Reduce [render blocking requests](https://web.dev/render-blocking-resources) by inlining external Google Fonts and Adobe Fonts CSS definitions in the application's HTML index file. **NOTE**: This requires internet access. | `boolean` | `true` | You can supply a value such as the following to apply optimization to one or the other: ``` "optimization": { "scripts": true, "styles": { "minify": true, "inlineCritical": true }, "fonts": true } ``` > For [Universal](glossary#universal), you can reduce the code rendered in the HTML page by setting styles optimization to `true`. > > ### Source map configuration The `sourceMap` browser builder option can be either a Boolean or an Object for more fine-tune configuration to control the source maps of an application. | Options | Details | Value type | Default value | | --- | --- | --- | --- | | `scripts` | Output source maps for all scripts. | `boolean` | `true` | | `styles` | Output source maps for all styles. | `boolean` | `true` | | `vendor` | Resolve vendor packages source maps. | `boolean` | `false` | | `hidden` | Output source maps used for error reporting tools. | `boolean` | `false` | The example below shows how to toggle one or more values to configure the source map outputs: ``` "sourceMap": { "scripts": true, "styles": false, "hidden": true, "vendor": true } ``` > When using hidden source maps, source maps are not referenced in the bundle. 
These are useful if you only want source maps to map error stack traces in error reporting tools. Hidden source maps don't expose your source maps in the browser developer tools. > > Last reviewed on Mon Feb 28 2022
angular Building a template-driven form Building a template-driven form =============================== This tutorial shows you how to create a template-driven form. The control elements in the form are bound to data properties that have input validation. The input validation helps maintain data integrity and styling to improve the user experience. Template-driven forms use [two-way data binding](architecture-components#data-binding "Intro to 2-way data binding") to update the data model in the component as changes are made in the template and vice versa. > Angular supports two design approaches for interactive forms. You can build forms by using Angular [template syntax and directives](glossary#template "Definition of template terms") to write templates with the form-specific directives. This tutorial describes the directives and techniques to use when writing templates. You can also use a reactive or model-driven approach to build forms. > > Template-driven forms are suitable for small or simple forms, while reactive forms are more scalable and suitable for complex forms. For a comparison of the two approaches, see [Introduction to Forms](forms-overview "Overview of Angular forms.") > > You can build almost any kind of form with an Angular template —login forms, contact forms, and pretty much any business form. You can lay out the controls creatively and bind them to the data in your object model. You can specify validation rules and display validation errors, conditionally allow input from specific controls, trigger built-in visual feedback, and much more. This tutorial shows you how to build a simplified form like the one from the [Tour of Heroes tutorial](../tutorial/tour-of-heroes "Tour of Heroes") to illustrate the techniques. > Run or download the example application: live example. > > Objectives ---------- This tutorial teaches you how to do the following: * Build an Angular form with a component and template * Use `[ngModel](../api/forms/ngmodel)` to create two-way data bindings for reading and writing input-control values * Provide visual feedback using special CSS classes that track the state of the controls * Display validation errors to users and conditionally allow input from form controls based on the form status * Share information across HTML elements using [template reference variables](template-reference-variables) Prerequisites ------------- Before going further into template-driven forms, you should have a basic understanding of the following. * [TypeScript](https://www.typescriptlang.org/ "The TypeScript language") and HTML5 programming * Angular application-design fundamentals, as described in [Angular Concepts](architecture "Introduction to Angular concepts") * The basics of [Angular template syntax](template-syntax "Template syntax guide") * The form-design concepts that are presented in [Introduction to Forms](forms-overview "Overview of Angular forms") Build a template-driven form ---------------------------- Template-driven forms rely on directives defined in the `[FormsModule](../api/forms/formsmodule)`. | Directives | Details | | --- | --- | | `[NgModel](../api/forms/ngmodel)` | Reconciles value changes in the attached form element with changes in the data model, allowing you to respond to user input with input validation and error handling. | | `[NgForm](../api/forms/ngform)` | Creates a top-level `[FormGroup](../api/forms/formgroup)` instance and binds it to a `<form>` element to track aggregated form value and validation status. 
As soon as you import `[FormsModule](../api/forms/formsmodule)`, this directive becomes active by default on all `<form>` tags. You don't need to add a special selector. | | `[NgModelGroup](../api/forms/ngmodelgroup)` | Creates and binds a `[FormGroup](../api/forms/formgroup)` instance to a DOM element. | ### The sample application The sample form in this guide is used by the *Hero Employment Agency* to maintain personal information about heroes. Every hero needs a job. This form helps the agency match the right hero with the right crisis. The form highlights some design features that make it easier to use. For instance, the two required fields have a green bar on the left to make them easy to spot. These fields have initial values, so the form is valid and the **Submit** button is enabled. Working with this form shows you: * How to include validation logic * How to customize the presentation with standard CSS * How to handle error conditions to ensure valid input If the user deletes the hero name, for example, the form becomes not valid. The application detects the changed status, and displays a validation error in an attention-grabbing style. The **Submit** button is not enabled, and the "required" bar to the left of the input control changes from green to red. ### Step overview In the course of this tutorial, you bind a sample form to data and handle user input using the following steps. 1. Build the basic form. * Define a sample data model * Include required infrastructure such as the `[FormsModule](../api/forms/formsmodule)` 2. Bind form controls to data properties using the `[ngModel](../api/forms/ngmodel)` directive and two-way data-binding syntax. * Examine how `[ngModel](../api/forms/ngmodel)` reports control states using CSS classes * Name controls to make them accessible to `[ngModel](../api/forms/ngmodel)` 3. Track input validity and control status using `[ngModel](../api/forms/ngmodel)`. * Add custom CSS to provide visual feedback on the status * Show and hide validation-error messages 4. Respond to a native HTML button-click event by adding to the model data. 5. Handle form submission using the [`ngSubmit`](../api/forms/ngform#properties) output property of the form. * Disable the **Submit** button until the form is valid * After submit, swap out the finished form for different content on the page Build the form -------------- You can recreate the sample application from the code provided here, or you can examine or download the live example. 1. The provided sample application creates the `Hero` class which defines the data model reflected in the form. ``` export class Hero { constructor( public id: number, public name: string, public power: string, public alterEgo?: string ) { } } ``` 2. The form layout and details are defined in the `HeroFormComponent` class. ``` import { Component } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'app-hero-form', templateUrl: './hero-form.component.html', styleUrls: ['./hero-form.component.css'] }) export class HeroFormComponent { powers = ['Really Smart', 'Super Flexible', 'Super Hot', 'Weather Changer']; model = new Hero(18, 'Dr. IQ', this.powers[0], 'Chuck Overstreet'); submitted = false; onSubmit() { this.submitted = true; } } ``` The component's `selector` value of "app-hero-form" means you can drop this form in a parent template using the `<app-hero-form>` tag. 3. The following code creates a new hero instance, so that the initial form can show an example hero. 
``` const myHero = new Hero(42, 'SkyDog', 'Fetch any object at any distance', 'Leslie Rollover'); console.log('My hero is called ' + myHero.name); // "My hero is called SkyDog" ``` This demo uses dummy data for `model` and `powers`. In a real app, you would inject a data service to get and save real data, or expose these properties as inputs and outputs. 4. The application enables the Forms feature and registers the created form component. ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { CommonModule } from '@angular/common'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { HeroFormComponent } from './hero-form/hero-form.component'; @NgModule({ imports: [ BrowserModule, CommonModule, FormsModule ], declarations: [ AppComponent, HeroFormComponent ], providers: [], bootstrap: [ AppComponent ] }) export class AppModule { } ``` 5. The form is displayed in the application layout defined by the root component's template. ``` <app-hero-form></app-hero-form> ``` The initial template defines the layout for a form with two form groups and a submit button. The form groups correspond to two properties of the Hero data model, name and alterEgo. Each group has a label and a box for user input. * The **Name** `<input>` control element has the HTML5 `required` attribute * The **Alter Ego** `<input>` control element does not because `alterEgo` is optional. The **Submit** button has some classes on it for styling. At this point, the form layout is all plain HTML5, with no bindings or directives. 6. The sample form uses some style classes from [Twitter Bootstrap](https://getbootstrap.com/css): `container`, `form-group`, `form-control`, and `btn`. To use these styles, the application's style sheet imports the library. ``` @import url('https://unpkg.com/[email protected]/dist/css/bootstrap.min.css'); ``` 7. The form makes the hero applicant choose one superpower from a fixed list of agency-approved powers. The predefined list of `powers` is part of the data model, maintained internally in `HeroFormComponent`. The Angular [NgForOf directive](../api/common/ngforof "API reference") iterates over the data values to populate the `<select>` element. ``` <div class="form-group"> <label for="power">Hero Power</label> <select class="form-control" id="power" required> <option *ngFor="let pow of powers" [value]="pow">{{pow}}</option> </select> </div> ``` If you run the application right now, you see the list of powers in the selection control. The input elements are not yet bound to data values or events, so they are still blank and have no behavior. Bind input controls to data properties -------------------------------------- The next step is to bind the input controls to the corresponding `Hero` properties with two-way data binding, so that they respond to user input by updating the data model, and also respond to programmatic changes in the data by updating the display. The `[ngModel](../api/forms/ngmodel)` directive declared in the `[FormsModule](../api/forms/formsmodule)` lets you bind controls in your template-driven form to properties in your data model. When you include the directive using the syntax for two-way data binding, `[([ngModel](../api/forms/ngmodel))]`, Angular can track the value and user interaction of the control and keep the view synced with the model. 1. Edit the template file `hero-form.component.html`. 2. Find the `<input>` tag next to the **Name** label. 3. 
Add the `[ngModel](../api/forms/ngmodel)` directive, using two-way data binding syntax `[([ngModel](../api/forms/ngmodel))]="..."`. ``` <input type="text" class="form-control" id="name" required [(ngModel)]="model.name" name="name"> TODO: remove this: {{model.name}} ``` > This example has a temporary diagnostic interpolation after each input tag, `{{model.name}}`, to show the current data value of the corresponding property. The comment reminds you to remove the diagnostic lines when you have finished observing the two-way data binding at work. > > ### Access the overall form status When you imported the `[FormsModule](../api/forms/formsmodule)` in your component, Angular automatically created and attached an [NgForm](../api/forms/ngform "API reference for NgForm") directive to the `<form>` tag in the template (because `[NgForm](../api/forms/ngform)` has the selector `form` that matches `<form>` elements). To get access to the `[NgForm](../api/forms/ngform)` and the overall form status, declare a [template reference variable](template-reference-variables). 1. Edit the template file `hero-form.component.html`. 2. Update the `<form>` tag with a template reference variable, `#heroForm`, and set its value as follows. ``` <form #heroForm="ngForm"> ``` The `heroForm` template variable is now a reference to the `[NgForm](../api/forms/ngform)` directive instance that governs the form as a whole. 3. Run the app. 4. Start typing in the **Name** input box. As you add and delete characters, you can see them appear and disappear from the data model. For example: The diagnostic line that shows interpolated values demonstrates that values are really flowing from the input box to the model and back again. ### Naming control elements When you use `[([ngModel](../api/forms/ngmodel))]` on an element, you must define a `name` attribute for that element. Angular uses the assigned name to register the element with the `[NgForm](../api/forms/ngform)` directive attached to the parent `<form>` element. The example added a `name` attribute to the `<input>` element and set it to "name", which makes sense for the hero's name. Any unique value will do, but using a descriptive name is helpful. 1. Add similar `[([ngModel](../api/forms/ngmodel))]` bindings and `name` attributes to **Alter Ego** and **Hero Power**. 2. You can now remove the diagnostic messages that show interpolated values. 3. To confirm that two-way data binding works for the entire hero model, add a new text binding with the [`json`](../api/common/jsonpipe) pipe at the top to the component's template, which serializes the data to a string. After these revisions, the form template should look like the following: ``` {{ model | json }} <div class="form-group"> <label for="name">Name</label> <input type="text" class="form-control" id="name" required [(ngModel)]="model.name" name="name"> </div> <div class="form-group"> <label for="alterEgo">Alter Ego</label> <input type="text" class="form-control" id="alterEgo" [(ngModel)]="model.alterEgo" name="alterEgo"> </div> <div class="form-group"> <label for="power">Hero Power</label> <select class="form-control" id="power" required [(ngModel)]="model.power" name="power"> <option *ngFor="let pow of powers" [value]="pow">{{pow}}</option> </select> </div> ``` * Notice that each `<input>` element has an `id` property. This is used by the `<label>` element's `for` attribute to match the label to its input control. This is a [standard HTML feature](https://developer.mozilla.org/docs/Web/HTML/Element/label). 
* Each `<input>` element also has the required `name` property that Angular uses to register the control with the form.If you run the application now and change every hero model property, the form might display like this: The diagnostic near the top of the form confirms that all of your changes are reflected in the model. 4. When you have observed the effects, you can delete the `{{ model | json }}` text binding. Track form states ----------------- Angular applies the `ng-submitted` class to `form` elements after the form has been submitted. This class can be used to change the form's style after it has been submitted. Track control states -------------------- Adding the `[NgModel](../api/forms/ngmodel)` directive to a control adds class names to the control that describe its state. These classes can be used to change a control's style based on its state. The following table describes the class names that Angular applies based on the control's state. | States | Class if true | Class if false | | --- | --- | --- | | The control has been visited. | `ng-touched` | `ng-untouched` | | The control's value has changed. | `ng-dirty` | `ng-pristine` | | The control's value is valid. | `ng-valid` | `ng-invalid` | Angular also applies the `ng-submitted` class to `form` elements upon submission, but not to the controls inside the `form` element. You use these CSS classes to define the styles for your control based on its status. ### Observe control states To see how the classes are added and removed by the framework, open the browser's developer tools and inspect the `<input>` element that represents the hero name. 1. Using your browser's developer tools, find the `<input>` element that corresponds to the **Name** input box. You can see that the element has multiple CSS classes in addition to "form-control". 2. When you first bring it up, the classes indicate that it has a valid value, that the value has not been changed since initialization or reset, and that the control has not been visited since initialization or reset. ``` <input … class="form-control ng-untouched ng-pristine ng-valid" …> ``` 3. Take the following actions on the **Name** `<input>` box, and observe which classes appear. * Look but don't touch. The classes indicate that it is untouched, pristine, and valid. * Click inside the name box, then click outside it. The control has now been visited, and the element has the `ng-touched` class instead of the `ng-untouched` class. * Add slashes to the end of the name. It is now touched and dirty. * Erase the name. This makes the value invalid, so the `ng-invalid` class replaces the `ng-valid` class. ### Create visual feedback for states The `ng-valid`/`ng-invalid` pair is particularly interesting, because you want to send a strong visual signal when the values are invalid. You also want to mark required fields. You can mark required fields and invalid data at the same time with a colored bar on the left of the input box: To change the appearance in this way, take the following steps. 1. Add definitions for the `ng-*` CSS classes. 2. Add these class definitions to a new `forms.css` file. 3. Add the new file to the project as a sibling to `index.html`: ``` .ng-valid[required], .ng-valid.required { border-left: 5px solid #42A948; /* green */ } .ng-invalid:not(form) { border-left: 5px solid #a94442; /* red */ } ``` 4. In the `index.html` file, update the `<head>` tag to include the new style sheet. 
``` <link rel="stylesheet" href="assets/forms.css"> ``` ### Show and hide validation error messages The **Name** input box is required and clearing it turns the bar red. That indicates that something is wrong, but the user doesn't know what is wrong or what to do about it. You can provide a helpful message by checking for and responding to the control's state. When the user deletes the name, the form should look like this: The **Hero Power** select box is also required, but it doesn't need this kind of error handling because the selection box already constrains the selection to valid values. To define and show an error message when appropriate, take the following steps. 1. Extend the `<input>` tag with a template reference variable that you can use to access the input box's Angular control from within the template. In the example, the variable is `#name="[ngModel](../api/forms/ngmodel)"`. > The template reference variable (`#name`) is set to `"[ngModel](../api/forms/ngmodel)"` because that is the value of the [`NgModel.exportAs`](../api/core/directive#exportAs) property. This property tells Angular how to link a reference variable to a directive. > > 2. Add a `<div>` that contains a suitable error message. 3. Show or hide the error message by binding properties of the `name` control to the message `<div>` element's `hidden` property. ``` <div [hidden]="name.valid || name.pristine" class="alert alert-danger"> ``` 4. Add a conditional error message to the `name` input box, as in the following example. ``` <label for="name">Name</label> <input type="text" class="form-control" id="name" required [(ngModel)]="model.name" name="name" #name="ngModel"> <div [hidden]="name.valid || name.pristine" class="alert alert-danger"> Name is required </div> ``` In this example, you hide the message when the control is either valid or *pristine*. Pristine means the user hasn't changed the value since it was displayed in this form. If you ignore the `pristine` state, you would hide the message only when the value is valid. If you arrive in this component with a new, blank hero or an invalid hero, you'll see the error message immediately, before you've done anything. You might want the message to display only when the user makes an invalid change. Hiding the message while the control is in the `pristine` state achieves that goal. You'll see the significance of this choice when you add a new hero to the form in the next step. Add a new hero -------------- This exercise shows how you can respond to a native HTML button-click event by adding to the model data. To let form users add a new hero, you will add a **New Hero** button that responds to a click event. 1. In the template, place a "New Hero" `<button>` element at the bottom of the form. 2. In the component file, add the hero-creation method to the hero data model. ``` newHero() { this.model = new Hero(42, '', ''); } ``` 3. Bind the button's click event to a hero-creation method, `newHero()`. ``` <button type="button" class="btn btn-default" (click)="newHero()">New Hero</button> ``` 4. Run the application again and click the **New Hero** button. The form clears, and the *required* bars to the left of the input box are red, indicating invalid `name` and `power` properties. Notice that the error messages are hidden. This is because the form is pristine; you haven't changed anything yet. 5. Enter a name and click **New Hero** again. Now the application displays a `Name is required` error message, because the input box is no longer pristine. 
The form remembers that you entered a name before clicking **New Hero**. 6. To restore the pristine state of the form controls, clear all of the flags imperatively by calling the form's `reset()` method after calling the `newHero()` method. ``` <button type="button" class="btn btn-default" (click)="newHero(); heroForm.reset()">New Hero</button> ``` Now clicking **New Hero** resets both the form and its control flags. > See the [User Input](user-input) guide for more information about listening for DOM events with an event binding and updating a corresponding component property. > > Submit the form with `ngSubmit` ------------------------------- The user should be able to submit this form after filling it in. The **Submit** button at the bottom of the form does nothing on its own, but it does trigger a form-submit event because of its type (`type="submit"`). To respond to this event, take the following steps. 1. Bind the form's [`ngSubmit`](../api/forms/ngform#properties) event property to the hero-form component's `onSubmit()` method. ``` <form (ngSubmit)="onSubmit()" #heroForm="ngForm"> ``` 2. Use the template reference variable, `#heroForm` to access the form that contains the **Submit** button and create an event binding. You will bind the form property that indicates its overall validity to the **Submit** button's `disabled` property. ``` <button type="submit" class="btn btn-success" [disabled]="!heroForm.form.valid">Submit</button> ``` 3. Run the application. Notice that the button is enabled —although it doesn't do anything useful yet. 4. Delete the **Name** value. This violates the "required" rule, so it displays the error message —and notice that it also disables the **Submit** button. You didn't have to explicitly wire the button's enabled state to the form's validity. The `[FormsModule](../api/forms/formsmodule)` did this automatically when you defined a template reference variable on the enhanced form element, then referred to that variable in the button control. ### Respond to form submission To show a response to form submission, you can hide the data entry area and display something else in its place. 1. Wrap the entire form in a `<div>` and bind its `hidden` property to the `HeroFormComponent.submitted` property. ``` <div [hidden]="submitted"> <h1>Hero Form</h1> <form (ngSubmit)="onSubmit()" #heroForm="ngForm"> <!-- ... all of the form ... --> </form> </div> ``` * The main form is visible from the start because the `submitted` property is false until you submit the form, as this fragment from the `HeroFormComponent` shows: ``` submitted = false; onSubmit() { this.submitted = true; } ``` * When you click the **Submit** button, the `submitted` flag becomes true and the form disappears. 2. To show something else while the form is in the submitted state, add the following HTML below the new `<div>` wrapper. ``` <div [hidden]="!submitted"> <h2>You submitted the following:</h2> <div class="row"> <div class="col-xs-3">Name</div> <div class="col-xs-9">{{ model.name }}</div> </div> <div class="row"> <div class="col-xs-3">Alter Ego</div> <div class="col-xs-9">{{ model.alterEgo }}</div> </div> <div class="row"> <div class="col-xs-3">Power</div> <div class="col-xs-9">{{ model.power }}</div> </div> <br> <button type="button" class="btn btn-primary" (click)="submitted=false">Edit</button> </div> ``` This `<div>`, which shows a read-only hero with interpolation bindings, appears only while the component is in the submitted state. 
The alternative display includes an *Edit* button whose click event is bound to an expression that clears the `submitted` flag. 3. Click the *Edit* button to switch the display back to the editable form. Summary ------- The Angular form discussed in this page takes advantage of the following framework features to provide support for data modification, validation, and more. * An Angular HTML form template * A form component class with a `@[Component](../api/core/component)` decorator * Handling form submission by binding to the `[NgForm.ngSubmit](../api/forms/ngform#ngSubmit)` event property * Template-reference variables such as `#heroForm` and `#name` * `[([ngModel](../api/forms/ngmodel))]` syntax for two-way data binding * The use of `name` attributes for validation and form-element change tracking * The reference variable's `valid` property on input controls indicates whether a control is valid or should show error messages * Controlling the **Submit** button's enabled state by binding to `[NgForm](../api/forms/ngform)` validity * Custom CSS classes that provide visual feedback to users about controls that are not valid Here's the code for the final version of the application: ``` import { Component } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'app-hero-form', templateUrl: './hero-form.component.html', styleUrls: ['./hero-form.component.css'] }) export class HeroFormComponent { powers = ['Really Smart', 'Super Flexible', 'Super Hot', 'Weather Changer']; model = new Hero(18, 'Dr. IQ', this.powers[0], 'Chuck Overstreet'); submitted = false; onSubmit() { this.submitted = true; } newHero() { this.model = new Hero(42, '', ''); } } ``` ``` <div class="container"> <div [hidden]="submitted"> <h1>Hero Form</h1> <form (ngSubmit)="onSubmit()" #heroForm="ngForm"> <div class="form-group"> <label for="name">Name</label> <input type="text" class="form-control" id="name" required [(ngModel)]="model.name" name="name" #name="ngModel"> <div [hidden]="name.valid || name.pristine" class="alert alert-danger"> Name is required </div> </div> <div class="form-group"> <label for="alterEgo">Alter Ego</label> <input type="text" class="form-control" id="alterEgo" [(ngModel)]="model.alterEgo" name="alterEgo"> </div> <div class="form-group"> <label for="power">Hero Power</label> <select class="form-control" id="power" required [(ngModel)]="model.power" name="power" #power="ngModel"> <option *ngFor="let pow of powers" [value]="pow">{{pow}}</option> </select> <div [hidden]="power.valid || power.pristine" class="alert alert-danger"> Power is required </div> </div> <button type="submit" class="btn btn-success" [disabled]="!heroForm.form.valid">Submit</button> <button type="button" class="btn btn-default" (click)="newHero(); heroForm.reset()">New Hero</button> </form> </div> <div [hidden]="!submitted"> <h2>You submitted the following:</h2> <div class="row"> <div class="col-xs-3">Name</div> <div class="col-xs-9">{{ model.name }}</div> </div> <div class="row"> <div class="col-xs-3">Alter Ego</div> <div class="col-xs-9">{{ model.alterEgo }}</div> </div> <div class="row"> <div class="col-xs-3">Power</div> <div class="col-xs-9">{{ model.power }}</div> </div> <br> <button type="button" class="btn btn-primary" (click)="submitted=false">Edit</button> </div> </div> ``` ``` export class Hero { constructor( public id: number, public name: string, public power: string, public alterEgo?: string ) { } } ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from 
'@angular/platform-browser'; import { CommonModule } from '@angular/common'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { HeroFormComponent } from './hero-form/hero-form.component'; @NgModule({ imports: [ BrowserModule, CommonModule, FormsModule ], declarations: [ AppComponent, HeroFormComponent ], providers: [], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` <app-hero-form></app-hero-form> ``` ``` import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { } ``` ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; platformBrowserDynamic().bootstrapModule(AppModule) .catch(err => console.error(err)); ``` ``` .ng-valid[required], .ng-valid.required { border-left: 5px solid #42A948; /* green */ } .ng-invalid:not(form) { border-left: 5px solid #a94442; /* red */ } ``` Last reviewed on Mon Feb 28 2022
angular AOT metadata errors AOT metadata errors =================== The following are metadata errors you may encounter, with explanations and suggested corrections. [Expression form not supported](aot-metadata-errors#expression-form-not-supported) [Reference to a local (non-exported) symbol](aot-metadata-errors#reference-to-a-local-symbol) [Only initialized variables and constants](aot-metadata-errors#only-initialized-variables) [Reference to a non-exported class](aot-metadata-errors#reference-to-a-non-exported-class) [Reference to a non-exported function](aot-metadata-errors#reference-to-a-non-exported-function) [Function calls are not supported](aot-metadata-errors#function-calls-not-supported) [Destructured variable or constant not supported](aot-metadata-errors#destructured-variable-not-supported) [Could not resolve type](aot-metadata-errors#could-not-resolve-type) [Name expected](aot-metadata-errors#name-expected) [Unsupported enum member name](aot-metadata-errors#unsupported-enum-member-name) [Tagged template expressions are not supported](aot-metadata-errors#tagged-template-expressions-not-supported) [Symbol reference expected](aot-metadata-errors#symbol-reference-expected) Expression form not supported ----------------------------- > *The compiler encountered an expression it didn't understand while evaluating Angular metadata.* > > Language features outside of the compiler's [restricted expression syntax](aot-compiler#expression-syntax) can produce this error, as seen in the following example: ``` // ERROR export class Fooish { … } … const prop = typeof Fooish; // typeof is not valid in metadata … // bracket notation is not valid in metadata { provide: 'token', useValue: { [prop]: 'value' } }; … ``` You can use `typeof` and bracket notation in normal application code. You just can't use those features within expressions that define Angular metadata. Avoid this error by sticking to the compiler's [restricted expression syntax](aot-compiler#expression-syntax) when writing Angular metadata and be wary of new or unusual TypeScript features. Reference to a local (non-exported) symbol ------------------------------------------ > *Reference to a local (non-exported) symbol 'symbol name'. Consider exporting the symbol.* > > The compiler encountered a referenced to a locally defined symbol that either wasn't exported or wasn't initialized. Here's a `provider` example of the problem. ``` // ERROR let foo: number; // neither exported nor initialized @Component({ selector: 'my-component', template: … , providers: [ { provide: Foo, useValue: foo } ] }) export class MyComponent {} ``` The compiler generates the component factory, which includes the `useValue` provider code, in a separate module. *That* factory module can't reach back to *this* source module to access the local (non-exported) `foo` variable. You could fix the problem by initializing `foo`. ``` let foo = 42; // initialized ``` The compiler will [fold](aot-compiler#code-folding) the expression into the provider as if you had written this. ``` providers: [ { provide: Foo, useValue: 42 } ] ``` Alternatively, you can fix it by exporting `foo` with the expectation that `foo` will be assigned at runtime when you actually know its value. 
``` // CORRECTED export let foo: number; // exported @Component({ selector: 'my-component', template: … , providers: [ { provide: Foo, useValue: foo } ] }) export class MyComponent {} ``` Adding `export` often works for variables referenced in metadata such as `providers` and `animations` because the compiler can generate *references* to the exported variables in these expressions. It doesn't need the *values* of those variables. Adding `export` doesn't work when the compiler needs the *actual value* in order to generate code. For example, it doesn't work for the `template` property. ``` // ERROR export let someTemplate: string; // exported but not initialized @Component({ selector: 'my-component', template: someTemplate }) export class MyComponent {} ``` The compiler needs the value of the `template` property *right now* to generate the component factory. The variable reference alone is insufficient. Prefixing the declaration with `export` merely produces a new error, "[`Only initialized variables and constants can be referenced`](aot-metadata-errors#only-initialized-variables)". Only initialized variables and constants ---------------------------------------- > *Only initialized variables and constants can be referenced because the value of this variable is needed by the template compiler.* > > The compiler found a reference to an exported variable or static field that wasn't initialized. It needs the value of that variable to generate code. The following example tries to set the component's `template` property to the value of the exported `someTemplate` variable which is declared but *unassigned*. ``` // ERROR export let someTemplate: string; @Component({ selector: 'my-component', template: someTemplate }) export class MyComponent {} ``` You'd also get this error if you imported `someTemplate` from some other module and neglected to initialize it there. ``` // ERROR - not initialized there either import { someTemplate } from './config'; @Component({ selector: 'my-component', template: someTemplate }) export class MyComponent {} ``` The compiler cannot wait until runtime to get the template information. It must statically derive the value of the `someTemplate` variable from the source code so that it can generate the component factory, which includes instructions for building the element based on the template. To correct this error, provide the initial value of the variable in an initializer clause *on the same line*. ``` // CORRECTED export let someTemplate = '<h1>Greetings from Angular</h1>'; @Component({ selector: 'my-component', template: someTemplate }) export class MyComponent {} ``` Reference to a non-exported class --------------------------------- > *Reference to a non-exported class `<class name>`.* *Consider exporting the class.* > > Metadata referenced a class that wasn't exported. For example, you may have defined a class and used it as an injection token in a providers array but neglected to export that class. ``` // ERROR abstract class MyStrategy { } … providers: [ { provide: MyStrategy, useValue: … } ] … ``` Angular generates a class factory in a separate module and that factory [can only access exported classes](aot-compiler#exported-symbols). To correct this error, export the referenced class. 
``` // CORRECTED export abstract class MyStrategy { } … providers: [ { provide: MyStrategy, useValue: … } ] … ``` Reference to a non-exported function ------------------------------------ > *Metadata referenced a function that wasn't exported.* > > For example, you may have set a providers `useFactory` property to a locally defined function that you neglected to export. ``` // ERROR function myStrategy() { … } … providers: [ { provide: MyStrategy, useFactory: myStrategy } ] … ``` Angular generates a class factory in a separate module and that factory [can only access exported functions](aot-compiler#exported-symbols). To correct this error, export the function. ``` // CORRECTED export function myStrategy() { … } … providers: [ { provide: MyStrategy, useFactory: myStrategy } ] … ``` Function calls are not supported -------------------------------- > *Function calls are not supported. Consider replacing the function or lambda with a reference to an exported function.* > > The compiler does not currently support [function expressions or lambda functions](aot-compiler#function-expression). For example, you cannot set a provider's `useFactory` to an anonymous function or arrow function like this. ``` // ERROR … providers: [ { provide: MyStrategy, useFactory: function() { … } }, { provide: OtherStrategy, useFactory: () => { … } } ] … ``` You also get this error if you call a function or method in a provider's `useValue`. ``` // ERROR import { calculateValue } from './utilities'; … providers: [ { provide: SomeValue, useValue: calculateValue() } ] … ``` To correct this error, export a function from the module and refer to the function in a `useFactory` provider instead. ``` // CORRECTED import { calculateValue } from './utilities'; export function myStrategy() { … } export function otherStrategy() { … } export function someValueFactory() { return calculateValue(); } … providers: [ { provide: MyStrategy, useFactory: myStrategy }, { provide: OtherStrategy, useFactory: otherStrategy }, { provide: SomeValue, useFactory: someValueFactory } ] … ``` Destructured variable or constant not supported ----------------------------------------------- > *Referencing an exported destructured variable or constant is not supported by the template compiler. Consider simplifying this to avoid destructuring.* > > The compiler does not support references to variables assigned by [destructuring](https://www.typescriptlang.org/docs/handbook/variable-declarations.html#destructuring). For example, you cannot write something like this: ``` // ERROR import { configuration } from './configuration'; // destructured assignment to foo and bar const {foo, bar} = configuration; … providers: [ {provide: Foo, useValue: foo}, {provide: Bar, useValue: bar}, ] … ``` To correct this error, refer to non-destructured values. ``` // CORRECTED import { configuration } from './configuration'; … providers: [ {provide: Foo, useValue: configuration.foo}, {provide: Bar, useValue: configuration.bar}, ] … ``` Could not resolve type ---------------------- > *The compiler encountered a type and can't determine which module exports that type.* > > This can happen if you refer to an ambient type. For example, the `Window` type is an ambient type declared in the global `.d.ts` file. You'll get an error if you reference it in the component constructor, which the compiler must statically analyze. 
``` // ERROR @Component({ }) export class MyComponent { constructor (private win: Window) { … } } ``` TypeScript understands ambient types so you don't import them. The Angular compiler does not understand a type that you neglect to export or import. In this case, the compiler doesn't understand how to inject something with the `Window` token. Do not refer to ambient types in metadata expressions. If you must inject an instance of an ambient type, you can finesse the problem in four steps: 1. Create an injection token for an instance of the ambient type. 2. Create a factory function that returns that instance. 3. Add a `useFactory` provider with that factory function. 4. Use `@[Inject](../api/core/inject)` to inject the instance. Here's an illustrative example. ``` // CORRECTED import { Inject } from '@angular/core'; export const WINDOW = new InjectionToken('Window'); export function _window() { return window; } @Component({ … providers: [ { provide: WINDOW, useFactory: _window } ] }) export class MyComponent { constructor (@Inject(WINDOW) private win: Window) { … } } ``` The `Window` type in the constructor is no longer a problem for the compiler because it uses the `@[Inject](../api/core/inject)(WINDOW)` to generate the injection code. Angular does something similar with the `[DOCUMENT](../api/common/document)` token so you can inject the browser's `document` object (or an abstraction of it, depending upon the platform in which the application runs). ``` import { Inject } from '@angular/core'; import { DOCUMENT } from '@angular/common'; @Component({ … }) export class MyComponent { constructor (@Inject(DOCUMENT) private doc: Document) { … } } ``` Name expected ------------- > *The compiler expected a name in an expression it was evaluating.* > > This can happen if you use a number as a property name as in the following example. ``` // ERROR provider: [{ provide: Foo, useValue: { 0: 'test' } }] ``` Change the name of the property to something non-numeric. ``` // CORRECTED provider: [{ provide: Foo, useValue: { '0': 'test' } }] ``` Unsupported enum member name ---------------------------- > *Angular couldn't determine the value of the [enum member](https://www.typescriptlang.org/docs/handbook/enums.html) that you referenced in metadata.* > > The compiler can understand simple enum values but not complex values such as those derived from computed properties. ``` // ERROR enum Colors { Red = 1, White, Blue = "Blue".length // computed } … providers: [ { provide: BaseColor, useValue: Colors.White } // ok { provide: DangerColor, useValue: Colors.Red } // ok { provide: StrongColor, useValue: Colors.Blue } // bad ] … ``` Avoid referring to enums with complicated initializers or computed properties. Tagged template expressions are not supported --------------------------------------------- > *Tagged template expressions are not supported in metadata.* > > The compiler encountered a JavaScript ES2015 [tagged template expression](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Template_literals#Tagged_template_literals) such as the following. ``` // ERROR const expression = 'funky'; const raw = String.raw`A tagged template ${expression} string`; … template: '<div>' + raw + '</div>' … ``` [`String.raw()`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String/raw) is a *tag function* native to JavaScript ES2015. The AOT compiler does not support tagged template expressions; avoid them in metadata expressions. 
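One way to avoid the error is to build the string without a tag function, so the metadata contains only expressions the collector can fold, or to move the markup into a file referenced by `templateUrl`. The following is a minimal sketch of the first option, reusing the names from the example above.

```
// CORRECTED (one possible rewrite)
const expression = 'funky';
// An ordinary string expression, which the collector can fold
const raw = 'A template ' + expression + ' string';
…
template: '<div>' + raw + '</div>'
…
```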
Symbol reference expected ------------------------- > *The compiler expected a reference to a symbol at the location specified in the error message.* > > This error can occur if you use an expression in the `extends` clause of a class. Last reviewed on Mon Feb 28 2022 angular Displaying values with interpolation Displaying values with interpolation ==================================== Prerequisites ------------- * [Basics of components](architecture-components) * [Basics of templates](glossary#template) * [Binding syntax](binding-syntax) Interpolation refers to embedding expressions into marked up text. By default, interpolation uses the double curly braces `{{` and `}}` as delimiters. To illustrate how interpolation works, consider an Angular component that contains a `currentCustomer` variable: ``` currentCustomer = 'Maria'; ``` Use interpolation to display the value of this variable in the corresponding component template: ``` <h3>Current customer: {{ currentCustomer }}</h3> ``` Angular replaces `currentCustomer` with the string value of the corresponding component property. In this case, the value is `Maria`. In the following example, Angular evaluates the `title` and `itemImageUrl` properties to display some title text and an image. ``` <p>{{title}}</p> <div><img alt="item" src="{{itemImageUrl}}"></div> ``` What's Next ----------- * [Property binding](property-binding) * [Event binding](event-binding) Last reviewed on Thu Apr 14 2022 angular Prepare to edit Angular documentation Prepare to edit Angular documentation ===================================== This topic describes the steps that prepare your local computer to edit and submit Angular documentation. > **IMPORTANT**: To submit changes to the Angular documentation, you must have: > > * A [GitHub](https://github.com "GitHub") account > * A signed [Contributor License Agreement](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#-signing-the-cla "Signing the CLA - Contributing to Angular | angular/angular | GitHub") > > Complete a contributor's license agreement ------------------------------------------ Review [Contributing to Angular](https://github.com/angular/angular/blob/main/CONTRIBUTING.md). These sections are particularly important for documentation contributions: 1. Read the Angular [Code of conduct](https://github.com/angular/code-of-conduct/blob/main/CODE_OF_CONDUCT.md) 2. Read the [Submission guidelines](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#-submission-guidelines). > **NOTE**: The topics in this section explain these guidelines specifically for documentation contributions. > > 3. Read and complete the [Contributor license agreement](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#-signing-the-cla) that applies to you. Install the required software ----------------------------- To edit, build, and test Angular documentation on your local computer, you need the following software. The instructions in this section assume that you are using the software in this list to complete the tasks. Some software in this list, such as the integrated development environment (IDE), can be substituted with similar software. If you use a substitute IDE, you might need to adapt the instructions in this section to your IDE. For more information about the required software, see [Setting up the local environment and workspace](setup-local). 
* **Version control software** + [Git command line](https://github.com/git-guides/install-git) + [GitHub desktop](https://desktop.github.com) (optional) * **Integrated development environment** + [Visual Studio Code](https://code.visualstudio.com) * **Utility software** + [node.js](https://nodejs.org/en/download) Angular requires an [active long-term-support (LTS) or maintenance LTS version](https://nodejs.org/about/releases) of Node.js. + [nvm](https://github.com/nvm-sh/nvm#about) + [Yarn](https://yarnpkg.com/getting-started/install) + [Homebrew](https://brew.sh) for macOS or [Chocolatey](https://chocolatey.org/install) for Windows + [Vale](https://github.com/angular/angular/tree/main/aio/tools/doc-linter/README.md#install-vale-on-your-development-system "Install Vale on your development system - Angular documentation lint tool | angular/angular | Github") (see note) > **IMPORTANT**: Wait until after you clone your fork of the [`https://github.com/angular/angular`](https://github.com/angular/angular "angular/angular | GitHub") repo to your local computer before you configure Vale settings. > > You can also install other tools and IDE extensions that you find helpful. Set up your workspaces ---------------------- The Angular documentation is stored with the Angular framework code in a GitHub source code repository, also called a *repo*, at: <https://github.com/angular/angular>. To contribute documentation to Angular, you need: * A GitHub account * A *fork* of the Angular repo in your personal GitHub account. This guide refers to your personal GitHub account as `personal`. You must replace `personal` in a GitHub reference with your GitHub username. The URL: `https://github.com/personal` is not a valid GitHub account. For convenience, this documentation uses these shorthand references: + `angular/angular` Refers to the Angular repo. This is also known as the *upstream* repo. + `personal/angular` Refers to your personal fork of the Angular repo. Replace `personal` with your GitHub username to identify your specific repo. This is also known as the *origin* repo. * A *clone* of your `personal/angular` repo on your local computer GitHub repos are cloned into a `git` workspace on your local computer. With this workspace and required tools, you can build, edit, and review the documentation from your local computer. When you can build the documentation from a workspace on your local computer, you are ready to make major changes to the Angular documentation. For more detailed information about how to set up your workspace, see [Create your repo and workspaces for Angular documentation](doc-prepare-to-edit#create-your-repo-and-workspace-for-angular-documentation). For more detailed information about how to build and test the documentation from your local computer, see [Build and test the Angular documentation](doc-prepare-to-edit#build-and-test-the-angular-documentation). Create your repo and workspace for Angular documentation -------------------------------------------------------- This section describes how to create the repo and the `git` workspace necessary to edit, test, and submit changes to the Angular documentation. > **IMPORTANT**: Because `git` commands are not beginner friendly, the topics in this section include procedures that should reduce the chance of `git` mishaps. Fortunately, because you are working in your own account, even if you make a mistake, you can't harm any of the Angular code or documentation. 
> > To follow the procedures in these topics, you must use the repo and directory configuration presented in this topic. The procedures in these topics are designed to work with this configuration. > > If you use a different configuration, the procedures in these topics might not work as expected and you could lose some of your changes. > > The code and documentation for the Angular framework are stored in a public repository, or repo, on [github.com](https://github.com) in the `angular` account. The path to the Angular repo is <https://github.com/angular/angular>, hence the abbreviated name, `angular/angular`. [GitHub](https://github.com) is a cloud service that hosts many accounts and repositories. You can imagine the `angular/angular` repo in GitHub as shown in this image. ### Fork the `angular/angular` repo to your account As a public repo, `angular/angular` is available for anyone to read and copy, but not to change. While only specific accounts have permission to make changes to `angular/angular`, anyone with a GitHub account can request a change to it. Change requests to `angular/angular` are called *pull requests*. A pull request is created by one account to ask another account to pull in a change. Before you can open a pull request, you need a forked copy of `angular/angular` in your personal GitHub account. To get a forked copy of `angular/angular`, you fork the `angular/angular` repo into your personal GitHub account and end up with the repos shown in the following image. From the perspective of `personal/angular`, `angular/angular` is the upstream repo and `personal/angular` is the origin repo. #### To fork the angular repo to your account Perform this procedure in a browser. 1. Sign into your [GitHub](https://github.com) account. If you don't have a GitHub account, [create a new account](https://github.com/join "Join GitHub | GitHub") before you continue. 2. Navigate to [`https://github.com/angular/angular`](https://github.com/angular/angular "angular/angular | GitHub"). 3. In [`https://github.com/angular/angular`](https://github.com/angular/angular "angular/angular | GitHub"), click the **Fork** button near the top-right corner of the page. This image is from the top of the [`https://github.com/angular/angular`](https://github.com/angular/angular "angular/angular | GitHub") page and shows the **Fork** button. 4. In **Create a new fork**: 1. Accept the default values in **Owner** and **Repository name**. 2. Confirm that **Copy the `main` branch only** is checked. 3. Click **Create repository**. The forking process can take a few minutes. 5. You now have a copy of the `angular/angular` repo in your GitHub account. After your fork of `angular/angular` is ready, your browser opens the web page of the forked repo in your GitHub account. In this image, notice that the account now shows the username of your personal GitHub account instead of the `angular` account. As a forked repo, your new repo maintains a reference to `angular/angular`. From your account, `git` considers your `personal/angular` repo as the origin repo and `angular/angular` as the upstream repo. You can think of this as: your changes originate in the *origin* repo and you send them *upstream* to the `angular/angular` repo. The message below the repo name in your account, `forked from angular/angular`, contains a link back to the upstream repo. This relationship comes into play later, such as when you update your `personal/angular` repo and when you open a pull request. 
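After you clone your fork, which the next section describes, you can also record this relationship in `git` itself by registering `angular/angular` as the `upstream` remote. The commands below are only a sketch of that common setup; the procedures that rely on it are described in other topics.

```
# Run these from the working directory of your personal/angular clone.
# Register the Angular repo as the upstream remote.
git remote add upstream https://github.com/angular/angular.git

# Confirm the remotes: origin is your fork, upstream is angular/angular.
git remote -v
```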
### Create a git workspace on your local computer A `git` workspace on your local computer is where copies of GitHub repos in the cloud are stored on your local computer. To edit Angular documentation on your local computer, you need a clone of your origin repo, `personal/angular`. Clone the `personal/angular` repo into the subdirectory for your account, as this illustration shows. Remember to replace `personal` with your GitHub username. The `personal/angular` directory in your workspace becomes your `working` directory. You do your editing in the working directory of your `personal/angular` repo. Cloning a repo duplicates the repo that's in the cloud on your local computer. There are procedures to keep the clone on your local computer in sync with the repo in the cloud that are described later. #### To clone the Angular repo into your workspace Perform these steps in a command-line tool on your local computer. 1. Navigate to the `workspace` directory. In this example, this is the directory named, `github-projects`. If this directory isn't on your local computer, create it, and then navigate to it before you continue. 2. From the workspace directory, run this command to create a directory for the repo from your `personal` account Remember to replace `personal` with your GitHub username. ``` mkdir personal ``` 3. From the workspace directory, run this command to clone the origin `personal/angular` repo into the `personal` account directory. Remember to replace `personal` with your GitHub username. ``` git clone https://github.com/personal/angular personal/angular ``` Your local computer is now configured as shown in the following illustration. Your `working` directory is the `personal/angular` directory in your `git` workspace directory. This directory and its subdirectories have the files that you edit to fix documentation issues. ### Complete the software installation After you clone the origin repo on your local computer, run these commands from a command-line tool: 1. Install the npm modules used by the Angular project. In a command line tool on your local computer: 1. Navigate to your `git` workspace. In this example, this is the `github-projects` directory. 2. In your `git` workspace, run this command to navigate to the documentation root directory in your clone of the `personal/angular` repo. Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 3. Run this command to install the Angular dependencies. ``` yarn ``` 4. Run this command to navigate to the documentation project. ``` cd aio ``` 5. Run this command to install the npm modules for the documentation. ``` yarn ``` 2. Locate `angular/aio/tools/doc-linter/vale.ini` in your working directory to use in the next step as the path to the configuration file in the **Vale:Config** setting. 3. [Install Vale](https://github.com/angular/angular/tree/main/aio/tools/doc-linter/README.md#install-vale-on-your-development-system "Install Vale on your development system - Angular documentation lint tool | angular/angular | Github") to complete the software installation. Build and test the Angular documentation ---------------------------------------- Angular provides tools to build and test the documentation. To review your work and before you submit your updates in a pull request, be sure to build, test, and verify your changes using these tools. 
> Note that the instructions found in <https://github.com/angular/angular/blob/main/docs/DEVELOPER.md> are to build and test the Angular framework and not the Angular documentation. > > The procedures on this page build only the Angular documentation. You don't need to build the Angular framework to build the Angular documentation. > > ### To navigate to the Angular documentation directory Perform these steps from a command-line tool on your local computer. 1. Navigate to the Angular documentation in the working directory of your account in your `git` workspace on your local computer. 2. Navigate to your `git` workspace directory. In this example, this is the `github-projects` directory. 1. Run this command to navigate to the working directory with the `angular` repo you forked to your personal account. Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 2. Run this command to navigate to the Angular documentation directory. ``` cd aio ``` The Angular documentation directory is the root of the Angular documentation files. These directories in the `angular/aio` directory are where you find the files that are edited the most. | Directory | Files | | --- | --- | | `angular/aio/content` | Files and other assets used in the Angular documentation | | `angular/aio/content/guide` | The markdown files for most Angular documentation | | `angular/aio/content/tutorial` | The markdown files used by the Tour of Heroes tutorial | The Angular documentation source has many other directories in `angular/aio` but they don't change often. ### To build and view the Angular documentation on your computer Perform these steps from a command-line tool on your local computer. 1. Build the Angular documentation. 1. From the Angular documentation directory, run this command: ``` yarn build ``` 2. If building the documentation reports one or more errors, fix the errors and repeat the previous step before you continue. 2. Start the local documentation server. 1. From the documentation directory, run this command: ``` yarn start ``` 2. Open a browser on your local computer and view your documentation at `https://localhost:4200`. 3. Review the documentation in the browser. ### To run the automated tests on the Angular documentation Perform these steps from a command-line tool on your local computer. 1. [Navigate to the documentation directory](doc-prepare-to-edit#to-navigate-to-the-angular-documentation-directory), if you're not already there. 2. From the documentation directory, run this command to build the documentation before you test it: ``` yarn build ``` 3. If building the documentation returns one or more errors, fix those and build the documentation again before you continue. 4. From the documentation directory, run these commands to start the automated tests that verify the docs are consistent. These are most, but not all, of the tests that are performed after you open your pull request. Some tests can only be run in the automated testing environment. ``` yarn e2e yarn docs-test ``` When you run these tests on your documentation updates, be sure to correct any errors before you open a pull request. Next steps ---------- After you build the documentation from your forked repo on your local computer and the tests run without error, you are ready to continue. You have successfully configured your local computer to edit Angular documentation and open pull requests. Continue to the other topics in this section for information about how to perform other documentation tasks. 
Last reviewed on Wed Oct 12 2022
angular NgModules NgModules ========= **NgModules** configure the injector and the compiler and help organize related things together. An NgModule is a class marked by the `@[NgModule](../api/core/ngmodule)` decorator. `@[NgModule](../api/core/ngmodule)` takes a metadata object that describes how to compile a component's template and how to create an injector at runtime. It identifies the module's own components, directives, and pipes, making some of them public, through the `exports` property, so that external components can use them. `@[NgModule](../api/core/ngmodule)` can also add service providers to the application dependency injectors. For an example application showcasing all the techniques that NgModules related pages cover, see the live example. For explanations on the individual techniques, visit the relevant NgModule pages under the NgModules section. Angular modularity ------------------ Modules are a great way to organize an application and extend it with capabilities from external libraries. Angular libraries are NgModules, such as `[FormsModule](../api/forms/formsmodule)`, `[HttpClientModule](../api/common/http/httpclientmodule)`, and `[RouterModule](../api/router/routermodule)`. Many third-party libraries are available as NgModules such as [Material Design](https://material.angular.io), [Ionic](https://ionicframework.com), and [AngularFire2](https://github.com/angular/angularfire2). NgModules consolidate components, directives, and pipes into cohesive blocks of functionality, each focused on a feature area, application business domain, workflow, or common collection of utilities. Modules can also add services to the application. Such services might be internally developed, like something you'd develop yourself or come from outside sources, such as the Angular router and HTTP client. Modules can be loaded eagerly when the application starts or lazy loaded asynchronously by the router. NgModule metadata does the following: * Declares which components, directives, and pipes belong to the module * Makes some of those components, directives, and pipes public so that other module's component templates can use them * Imports other modules with the components, directives, and pipes that components in the current module need * Provides services that other application components can use Every Angular application has at least one module, the root module. You [bootstrap](bootstrapping) that module to launch the application. The root module is all you need in an application with few components. As the application grows, you refactor the root module into [feature modules](feature-modules) that represent collections of related functionality. You then import these modules into the root module. The basic NgModule ------------------ The [Angular CLI](cli) generates the following basic `AppModule` when creating a new application. ``` // imports import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; // @NgModule decorator with its metadata @NgModule({ declarations: [AppComponent], imports: [BrowserModule], providers: [], bootstrap: [AppComponent] }) export class AppModule {} ``` At the top are the import statements. The next section is where you configure the `@[NgModule](../api/core/ngmodule)` by stating what components and directives belong to it (`declarations`) as well as which other modules it uses (`imports`). 
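As an application grows, the same metadata properties apply to the feature modules you add. For example, a hypothetical `GreetingModule` (the names here are illustrative, not part of the CLI output) might declare its own component and export it so that templates in other modules can use it:

```
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { GreetingComponent } from './greeting.component';

@NgModule({
  // Components, directives, and pipes that belong to this module
  declarations: [GreetingComponent],
  // Other modules whose exported declarables this module's templates need
  imports: [CommonModule],
  // Declarables made public to any module that imports GreetingModule
  exports: [GreetingComponent]
})
export class GreetingModule {}
```

Any module that lists `GreetingModule` in its own `imports` array can then use `GreetingComponent` in its component templates.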
For more information on the structure of an `@[NgModule](../api/core/ngmodule)`, be sure to read [Bootstrapping](bootstrapping). More on NgModules ----------------- You may also be interested in the following: * [Feature Modules](feature-modules) * [Entry Components](entry-components) * [Providers](providers) * [Types of NgModules](module-types) Last reviewed on Mon Feb 28 2022 angular Template syntax Template syntax =============== In Angular, a *template* is a chunk of HTML. Use special syntax within a template to build on many of Angular's features. Prerequisites ------------- Before learning template syntax, you should be familiar with the following: * [Angular concepts](architecture) * JavaScript * HTML * CSS Each Angular template in your application is a section of HTML to include as a part of the page that the browser displays. An Angular HTML template renders a view, or user interface, in the browser, just like regular HTML, but with a lot more functionality. When you generate an Angular application with the Angular CLI, the `app.component.html` file is the default template containing placeholder HTML. The template syntax guides show you how to control the UX/UI by coordinating data between the class and the template. > Most of the Template Syntax guides have dedicated working example applications that demonstrate the individual topic of each guide. To see all of them working together in one application, see the comprehensive . > > Empower your HTML ----------------- Extend the HTML vocabulary of your applications With special Angular syntax in your templates. For example, Angular helps you get and set DOM (Document Object Model) values dynamically with features such as built-in template functions, variables, event listening, and data binding. Almost all HTML syntax is valid template syntax. However, because an Angular template is part of an overall webpage, and not the entire page, you don't need to include elements such as `<html>`, `<body>`, or `<base>`, and can focus exclusively on the part of the page you are developing. > To eliminate the risk of script injection attacks, Angular does not support the `<script>` element in templates. Angular ignores the `<script>` tag and outputs a warning to the browser console. For more information, see the [Security](security) page. > > More on template syntax ----------------------- You might also be interested in the following: | Topics | Details | | --- | --- | | [Interpolation](interpolation) | Learn how to use interpolation and expressions in HTML. | | [Template statements](template-statements) | Respond to events in your templates. | | [Binding syntax](binding-syntax) | Use binding to coordinate values in your application. | | [Property binding](property-binding) | Set properties of target elements or directive `@[Input](../api/core/input)()` decorators. | | [Attribute, class, and style bindings](attribute-binding) | Set the value of attributes, classes, and styles. | | [Event binding](event-binding) | Listen for events and your HTML. | | [Two-way binding](two-way-binding) | Share data between a class and its template. | | [Built-in directives](built-in-directives) | Listen to and modify the behavior and layout of HTML. | | [Template reference variables](template-reference-variables) | Use special variables to reference a DOM element within a template. 
| | [Inputs and Outputs](inputs-outputs) | Share data between the parent context and child directives or components | | [Template expression operators](template-expression-operators) | Learn about the pipe operator (`|`), and protect against `null` or `undefined` values in your HTML. | | [SVG in templates](svg-in-templates) | Dynamically generate interactive graphics. | Last reviewed on Mon Feb 28 2022 angular Launching your app with a root module Launching your app with a root module ===================================== Prerequisites ------------- A basic understanding of the following: * [JavaScript Modules vs. NgModules](ngmodule-vs-jsmodule) An NgModule describes how the application parts fit together. Every application has at least one Angular module, the *root* module, which must be present for bootstrapping the application on launch. By convention and by default, this NgModule is named `AppModule`. When you use the [Angular CLI](cli) command `ng new` to generate an app, the default `AppModule` looks like the following: ``` /* JavaScript imports */ import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; /* the AppModule class with the @NgModule decorator */ @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } ``` After the import statements is a class with the **`@[NgModule](../api/core/ngmodule)`** [decorator](glossary#decorator "\"Decorator\" explained"). The `@[NgModule](../api/core/ngmodule)` decorator identifies `AppModule` as an `[NgModule](../api/core/ngmodule)` class. `@[NgModule](../api/core/ngmodule)` takes a metadata object that tells Angular how to compile and launch the application. | metadata object | Details | | --- | --- | | declarations | This application's lone component. | | imports | Import `[BrowserModule](../api/platform-browser/browsermodule)` to have browser-specific services such as DOM rendering, sanitization, and location. | | providers | The service providers. | | bootstrap | The *root* component that Angular creates and inserts into the `index.html` host web page. | The default application created by the Angular CLI only has one component, `AppComponent`, so it is in both the `declarations` and the `bootstrap` arrays. The `declarations` array ------------------------ The module's `declarations` array tells Angular which components belong to that module. As you create more components, add them to `declarations`. You must declare every component in exactly one `[NgModule](../api/core/ngmodule)` class. If you use a component without declaring it, Angular returns an error message. The `declarations` array only takes declarables. Declarables are components, [directives](attribute-directives), and <pipes>. All of a module's declarables must be in the `declarations` array. Declarables must belong to exactly one module. The compiler emits an error if you try to declare the same class in more than one module. These declared classes are visible within the module but invisible to components in a different module, unless they are exported from this module and the other module imports this one. An example of what goes into a declarations array follows: ``` declarations: [ YourComponent, YourPipe, YourDirective ], ``` A declarable can only belong to one module, so only declare it in one `@[NgModule](../api/core/ngmodule)`. 
When you need it elsewhere, import the module that contains the declarable you need. ### Using directives with `@[NgModule](../api/core/ngmodule)` Use the `declarations` array for directives. To use a directive, component, or pipe in a module, you must do a few things: 1. Export it from the file where you wrote it. 2. Import it into the appropriate module. 3. Declare it in the `@[NgModule](../api/core/ngmodule)` `declarations` array. Those three steps look like the following. In the file where you create your directive, export it. The following example, named `ItemDirective` is the default directive structure that the CLI generates in its own file, `item.directive.ts`: ``` import { Directive } from '@angular/core'; @Directive({ selector: '[appItem]' }) export class ItemDirective { // code goes here constructor() { } } ``` The key point here is that you have to export it, so that you can import it elsewhere. Next, import it into the `[NgModule](../api/core/ngmodule)`, in this example `app.module.ts`, with a JavaScript import statement: ``` import { ItemDirective } from './item.directive'; ``` And in the same file, add it to the `@[NgModule](../api/core/ngmodule)` `declarations` array: ``` declarations: [ AppComponent, ItemDirective ], ``` Now you could use your `ItemDirective` in a component. This example uses `AppModule`, but you'd do it the same way for a feature module. For more about directives, see [Attribute Directives](attribute-directives) and [Structural Directives](structural-directives). You'd also use the same technique for <pipes> and components. Remember, components, directives, and pipes belong to one module only. You only need to declare them once in your application because you share them by importing the necessary modules. This saves you time and helps keep your application lean. The `imports` array ------------------- The module's `imports` array appears exclusively in the `@[NgModule](../api/core/ngmodule)` metadata object. It tells Angular about other NgModules that this particular module needs to function properly. ``` imports: [ BrowserModule, FormsModule, HttpClientModule ], ``` This list of modules are those that export components, directives, or pipes that component templates in this module reference. In this case, the component is `AppComponent`, which references components, directives, or pipes in `[BrowserModule](../api/platform-browser/browsermodule)`, `[FormsModule](../api/forms/formsmodule)`, or `[HttpClientModule](../api/common/http/httpclientmodule)`. A component template can reference another component, directive, or pipe when the referenced class is declared in this module, or the class was imported from another module. The `providers` array --------------------- The providers array is where you list the services the application needs. When you list services here, they are available app-wide. You can scope them when using feature modules and lazy loading. For more information, see [Providers](providers). The `bootstrap` array --------------------- The application launches by bootstrapping the root `AppModule`, which is also referred to as an `entryComponent`. Among other things, the bootstrapping process creates the component(s) listed in the `bootstrap` array and inserts each one into the browser DOM. Each bootstrapped component is the base of its own tree of components. Inserting a bootstrapped component usually triggers a cascade of component creations that fill out that tree. 
While you can put more than one component tree on a host web page, most applications have only one component tree and bootstrap a single root component. This one root component is usually called `AppComponent` and is in the root module's `bootstrap` array. In a situation where you want to bootstrap a component based on an API response, or you want to mount the `AppComponent` in a different DOM node that doesn't match the component selector, please refer to `[ApplicationRef.bootstrap()](../api/core/applicationref#bootstrap)` documentation. More about Angular Modules -------------------------- For more on NgModules you're likely to see frequently in applications, see [Frequently Used Modules](frequent-ngmodules). Last reviewed on Mon Feb 28 2022 angular Sharing data between child and parent directives and components Sharing data between child and parent directives and components =============================================================== A common pattern in Angular is sharing data between a parent component and one or more child components. Implement this pattern with the `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` decorators. > See the live example for a working example containing the code snippets in this guide. > > Consider the following hierarchy: ``` <parent-component> <child-component></child-component> </parent-component> ``` The `<parent-component>` serves as the context for the `<child-component>`. `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` give a child component a way to communicate with its parent component. `@[Input](../api/core/input)()` lets a parent component update data in the child component. Conversely, `@[Output](../api/core/output)()` lets the child send data to a parent component. Sending data to a child component --------------------------------- The `@[Input](../api/core/input)()` decorator in a child component or directive signifies that the property can receive its value from its parent component. To use `@[Input](../api/core/input)()`, you must configure the parent and child. ### Configuring the child component To use the `@[Input](../api/core/input)()` decorator in a child component class, first import `[Input](../api/core/input)` and then decorate the property with `@[Input](../api/core/input)()`, as in the following example. ``` import { Component, Input } from '@angular/core'; // First, import Input export class ItemDetailComponent { @Input() item = ''; // decorate the property with @Input() } ``` In this case, `@[Input](../api/core/input)()` decorates the property `item`, which has a type of `string`, however, `@[Input](../api/core/input)()` properties can have any type, such as `number`, `string`, `boolean`, or `object`. The value for `item` comes from the parent component. Next, in the child component template, add the following: ``` <p> Today's item: {{item}} </p> ``` ### Configuring the parent component The next step is to bind the property in the parent component's template. In this example, the parent component template is `app.component.html`. 1. Use the child's selector, here `<app-item-detail>`, as a directive within the parent component template. 2. Use [property binding](property-binding) to bind the `item` property in the child to the `currentItem` property of the parent. ``` <app-item-detail [item]="currentItem"></app-item-detail> ``` 3. 
In the parent component class, designate a value for `currentItem`: ``` export class AppComponent { currentItem = 'Television'; } ``` With `@[Input](../api/core/input)()`, Angular passes the value for `currentItem` to the child so that `item` renders as `Television`. The following diagram shows this structure: The target in the square brackets, `[]`, is the property you decorate with `@[Input](../api/core/input)()` in the child component. The binding source, the part to the right of the equal sign, is the data that the parent component passes to the nested component. ### Watching for `@[Input](../api/core/input)()` changes To watch for changes on an `@[Input](../api/core/input)()` property, use `[OnChanges](../api/core/onchanges)`, one of Angular's [lifecycle hooks](lifecycle-hooks). See the [`OnChanges`](lifecycle-hooks#onchanges) section of the [Lifecycle Hooks](lifecycle-hooks) guide for more details and examples. Sending data to a parent component ---------------------------------- The `@[Output](../api/core/output)()` decorator in a child component or directive lets data flow from the child to the parent. `@[Output](../api/core/output)()` marks a property in a child component as a doorway through which data can travel from the child to the parent. The child component uses the `@[Output](../api/core/output)()` property to raise an event to notify the parent of the change. To raise an event, an `@[Output](../api/core/output)()` must have the type of `[EventEmitter](../api/core/eventemitter)`, which is a class in `@angular/core` that you use to emit custom events. The following example shows how to set up an `@[Output](../api/core/output)()` in a child component that pushes data from an HTML `<input>` to an array in the parent component. To use `@[Output](../api/core/output)()`, you must configure the parent and child. ### Configuring the child component The following example features an `<input>` where a user can enter a value and click a `<button>` that raises an event. The `[EventEmitter](../api/core/eventemitter)` then relays the data to the parent component. 1. Import `[Output](../api/core/output)` and `[EventEmitter](../api/core/eventemitter)` in the child component class: ``` import { Output, EventEmitter } from '@angular/core'; ``` 2. In the component class, decorate a property with `@[Output](../api/core/output)()`. The following example `newItemEvent` `@[Output](../api/core/output)()` has a type of `[EventEmitter](../api/core/eventemitter)`, which means it's an event. ``` @Output() newItemEvent = new EventEmitter<string>(); ``` The different parts of the preceding declaration are as follows: | Declaration parts | Details | | --- | --- | | `@[Output](../api/core/output)()` | A decorator function marking the property as a way for data to go from the child to the parent. | | `newItemEvent` | The name of the `@[Output](../api/core/output)()`. | | `[EventEmitter](../api/core/eventemitter)<string>` | The `@[Output](../api/core/output)()`'s type. | | `new [EventEmitter](../api/core/eventemitter)<string>()` | Tells Angular to create a new event emitter and that the data it emits is of type string. | For more information on `[EventEmitter](../api/core/eventemitter)`, see the [EventEmitter API documentation](../api/core/eventemitter). 3. 
Create an `addNewItem()` method in the same component class: ``` export class ItemOutputComponent { @Output() newItemEvent = new EventEmitter<string>(); addNewItem(value: string) { this.newItemEvent.emit(value); } } ``` The `addNewItem()` function uses the `@[Output](../api/core/output)()`, `newItemEvent`, to raise an event with the value the user types into the `<input>`. ### Configuring the child's template The child's template has two controls. The first is an HTML `<input>` with a [template reference variable](template-reference-variables), `#newItem`, where the user types in an item name. The `value` property of the `#newItem` variable stores what the user types into the `<input>`. ``` <label for="item-input">Add an item:</label> <input type="text" id="item-input" #newItem> <button type="button" (click)="addNewItem(newItem.value)">Add to parent's list</button> ``` The second element is a `<button>` with a `click` [event binding](event-binding). The `(click)` event is bound to the `addNewItem()` method in the child component class. The `addNewItem()` method takes as its argument the value of the `#newItem.value` property. ### Configuring the parent component The `AppComponent` in this example features a list of `items` in an array and a method for adding more items to the array. ``` export class AppComponent { items = ['item1', 'item2', 'item3', 'item4']; addItem(newItem: string) { this.items.push(newItem); } } ``` The `addItem()` method takes an argument in the form of a string and then adds that string to the `items` array. ### Configuring the parent's template 1. In the parent's template, bind the parent's method to the child's event. 2. Put the child selector, here `<app-item-output>`, within the parent component's template, `app.component.html`. ``` <app-item-output (newItemEvent)="addItem($event)"></app-item-output> ``` The event binding, `(newItemEvent)='addItem($event)'`, connects the event in the child, `newItemEvent`, to the method in the parent, `addItem()`. The `$event` contains the data that the user types into the `<input>` in the child template UI. To see the `@[Output](../api/core/output)()` working, add the following to the parent's template: ``` <ul> <li *ngFor="let item of items">{{item}}</li> </ul> ``` The `*[ngFor](../api/common/ngfor)` iterates over the items in the `items` array. When you enter a value in the child's `<input>` and click the button, the child emits the event and the parent's `addItem()` method pushes the value to the `items` array and new item renders in the list. Using `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` together ------------------------------------------------------------------------------------ Use `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` on the same child component as follows: ``` <app-input-output [item]="currentItem" (deleteRequest)="crossOffItem($event)"> </app-input-output> ``` The target, `item`, which is an `@[Input](../api/core/input)()` property in the child component class, receives its value from the parent's property, `currentItem`. When you click delete, the child component raises an event, `deleteRequest`, which is the argument for the parent's `crossOffItem()` method. The following diagram shows the different parts of the `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` on the `<app-input-output>` child component. 
The child selector is `<app-input-output>` with `item` and `deleteRequest` being `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` properties in the child component class. The property `currentItem` and the method `crossOffItem()` are both in the parent component class. To combine property and event bindings using the banana-in-a-box syntax, `[()]`, see [Two-way Binding](two-way-binding). Last reviewed on Mon Feb 28 2022
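As a closing reference for the combined binding above, a minimal sketch of such a child component might look like this (the class name, the `string` typing, and the template are assumptions for illustration, not taken from this guide's sample code):

```
import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'app-input-output',
  template: `
    <p>Current item: {{ item }}</p>
    <!-- Emits deleteRequest so the parent's crossOffItem($event) can react. -->
    <button type="button" (click)="deleteRequest.emit(item)">Delete</button>
  `,
})
export class InputOutputComponent {
  @Input() item = '';                                   // receives the parent's currentItem
  @Output() deleteRequest = new EventEmitter<string>(); // notifies the parent
}
```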
angular Introduction to Angular concepts Introduction to Angular concepts ================================ Angular is a platform and framework for building single-page client applications using HTML and TypeScript. Angular is written in TypeScript. It implements core and optional functionality as a set of TypeScript libraries that you import into your applications. The architecture of an Angular application relies on certain fundamental concepts. The basic building blocks of the Angular framework are Angular components that are organized into *NgModules*. NgModules collect related code into functional sets; an Angular application is defined by a set of NgModules. An application always has at least a *root module* that enables bootstrapping, and typically has many more *feature modules*. * Components define *views*, which are sets of screen elements that Angular can choose among and modify according to your program logic and data * Components use *services*, which provide specific functionality not directly related to views. Service providers can be *injected* into components as *dependencies*, making your code modular, reusable, and efficient. Modules, components and services are classes that use *decorators*. These decorators mark their type and provide metadata that tells Angular how to use them. * The metadata for a component class associates it with a *template* that defines a view. A template combines ordinary HTML with Angular *directives* and *binding markup* that allow Angular to modify the HTML before rendering it for display. * The metadata for a service class provides the information Angular needs to make it available to components through *dependency injection (DI)* An application's components typically define many views, arranged hierarchically. Angular provides the `[Router](../api/router/router)` service to help you define navigation paths among views. The router provides sophisticated in-browser navigational capabilities. > See the [Angular Glossary](glossary) for basic definitions of important Angular terms and usage. > > > For the sample application that this page describes, see the live example. > > Modules ------- Angular *NgModules* differ from and complement JavaScript (ES2015) modules. An NgModule declares a compilation context for a set of components that is dedicated to an application domain, a workflow, or a closely related set of capabilities. An NgModule can associate its components with related code, such as services, to form functional units. Every Angular application has a *root module*, conventionally named `AppModule`, which provides the bootstrap mechanism that launches the application. An application typically contains many functional modules. Like JavaScript modules, NgModules can import functionality from other NgModules, and allow their own functionality to be exported and used by other NgModules. For example, to use the router service in your app, you import the `[Router](../api/router/router)` NgModule. Organizing your code into distinct functional modules helps in managing development of complex applications, and in designing for reusability. In addition, this technique lets you take advantage of *lazy-loading* —that is, loading modules on demand— to minimize the amount of code that needs to be loaded at startup. > For a more detailed discussion, see [Introduction to modules](architecture-modules). 
> > Components ---------- Every Angular application has at least one component, the *root component* that connects a component hierarchy with the page document object model (DOM). Each component defines a class that contains application data and logic, and is associated with an HTML *template* that defines a view to be displayed in a target environment. The `@[Component](../api/core/component)()` decorator identifies the class immediately below it as a component, and provides the template and related component-specific metadata. > Decorators are functions that modify JavaScript classes. Angular defines a number of decorators that attach specific kinds of metadata to classes, so that the system knows what those classes mean and how they should work. > > [Learn more about decorators on the web.](https://medium.com/google-developers/exploring-es7-decorators-76ecb65fb841#.x5c2ndtx0) > > ### Templates, directives, and data binding A template combines HTML with Angular markup that can modify HTML elements before they are displayed. Template *directives* provide program logic, and *binding markup* connects your application data and the DOM. There are two types of data binding: | Data bindings | Details | | --- | --- | | Event binding | Lets your application respond to user input in the target environment by updating your application data. | | Property binding | Lets you interpolate values that are computed from your application data into the HTML. | Before a view is displayed, Angular evaluates the directives and resolves the binding syntax in the template to modify the HTML elements and the DOM, according to your program data and logic. Angular supports *two-way data binding*, meaning that changes in the DOM, such as user choices, are also reflected in your program data. Your templates can use *pipes* to improve the user experience by transforming values for display. For example, use pipes to display dates and currency values that are appropriate for a user's locale. Angular provides predefined pipes for common transformations, and you can also define your own pipes. > For a more detailed discussion of these concepts, see [Introduction to components](architecture-components). > > Services and dependency injection --------------------------------- For data or logic that isn't associated with a specific view, and that you want to share across components, you create a *service* class. A service class definition is immediately preceded by the `@[Injectable](../api/core/injectable)()` decorator. The decorator provides the metadata that allows other providers to be **injected** as dependencies into your class. *Dependency injection* (DI) lets you keep your component classes lean and efficient. They don't fetch data from the server, validate user input, or log directly to the console; they delegate such tasks to services. > For a more detailed discussion, see [Introduction to services and DI](architecture-services). > > ### Routing The Angular `[Router](../api/router/router)` NgModule provides a service that lets you define a navigation path among the different application states and view hierarchies in your application. 
It is modeled on the familiar browser navigation conventions: * Enter a URL in the address bar and the browser navigates to a corresponding page * Click links on the page and the browser navigates to a new page * Click the browser's back and forward buttons and the browser navigates backward and forward through the history of pages you've seen The router maps URL-like paths to views instead of pages. When a user performs an action, such as clicking a link, that would load a new page in the browser, the router intercepts the browser's behavior, and shows or hides view hierarchies. If the router determines that the current application state requires particular functionality, and the module that defines it hasn't been loaded, the router can *lazy-load* the module on demand. The router interprets a link URL according to your application's view navigation rules and data state. You can navigate to new views when the user clicks a button or selects from a drop box, or in response to some other stimulus from any source. The router logs activity in the browser's history, so the back and forward buttons work as well. To define navigation rules, you associate *navigation paths* with your components. A path uses a URL-like syntax that integrates your program data, in much the same way that template syntax integrates your views with your program data. You can then apply program logic to choose which views to show or to hide, in response to user input and your own access rules. > For a more detailed discussion, see [Routing and navigation](router). > > What's next ----------- You've learned the basics about the main building blocks of an Angular application. The following diagram shows how these basic pieces are related. * Together, a component and template define an Angular view + A decorator on a component class adds the metadata, including a pointer to the associated template + Directives and binding markup in a component's template modify views based on program data and logic * The dependency injector provides services to a component, such as the router service that lets you define navigation among views Each of these subjects is introduced in more detail in the following pages. * [Introduction to Modules](architecture-modules) * [Introduction to Components](architecture-components) + [Templates and views](architecture-components#templates-and-views) + [Component metadata](architecture-components#component-metadata) + [Data binding](architecture-components#data-binding) + [Directives](architecture-components#directives) + [Pipes](architecture-components#pipes) * [Introduction to services and dependency injection](architecture-services) When you're familiar with these fundamental building blocks, you can explore them in more detail in the documentation. To learn about more tools and techniques that are available to help you build and deploy Angular applications, see [Next steps: tools and techniques](architecture-next-steps). Last reviewed on Mon Feb 28 2022 angular Getting started with standalone components Getting started with standalone components ========================================== **Standalone components** provide a simplified way to build Angular applications. Standalone components, directives, and pipes aim to streamline the authoring experience by reducing the need for `[NgModule](../api/core/ngmodule)`s. Existing applications can optionally and incrementally adopt the new standalone style without any breaking changes. 
Creating standalone components ------------------------------ ### The `standalone` flag and component `imports` Components, directives, and pipes can now be marked as `standalone: true`. Angular classes marked as standalone do not need to be declared in an `[NgModule](../api/core/ngmodule)` (the Angular compiler will report an error if you try). Standalone components specify their dependencies directly instead of getting them through `[NgModule](../api/core/ngmodule)`s. For example, if `PhotoGalleryComponent` is a standalone component, it can directly import another standalone component `ImageGridComponent`: ``` @Component({ standalone: true, selector: 'photo-gallery', imports: [ImageGridComponent], template: ` ... <image-grid [images]="imageList"></image-grid> `, }) export class PhotoGalleryComponent { // component logic } ``` `imports` can also be used to reference standalone directives and pipes. In this way, standalone components can be written without the need to create an `[NgModule](../api/core/ngmodule)` to manage template dependencies. ### Using existing NgModules in a standalone component When writing a standalone component, you may want to use other components, directives, or pipes in the component's template. Some of those dependencies might not be marked as standalone, but instead declared and exported by an existing `[NgModule](../api/core/ngmodule)`. In this case, you can import the `[NgModule](../api/core/ngmodule)` directly into the standalone component: ``` @Component({ standalone: true, selector: 'photo-gallery', // an existing module is imported directly into a standalone component imports: [MatButtonModule], template: ` ... <button mat-button>Next Page</button> `, }) export class PhotoGalleryComponent { // logic } ``` You can use standalone components with existing `[NgModule](../api/core/ngmodule)`-based libraries or dependencies in your template. Standalone components can take full advantage of the existing ecosystem of Angular libraries. Using standalone components in NgModule-based applications ---------------------------------------------------------- Standalone components can also be imported into existing NgModules-based contexts. This allows existing applications (which are using NgModules today) to incrementally adopt the new, standalone style of component. You can import a standalone component (or directive, or pipe) just like you would an `[NgModule](../api/core/ngmodule)` - using `[NgModule.imports](../api/core/ngmodule#imports)`: ``` @NgModule({ declarations: [AlbumComponent], exports: [AlbumComponent], imports: [PhotoGalleryComponent], }) export class AlbumModule {} ``` Bootstrapping an application using a standalone component --------------------------------------------------------- An Angular application can be bootstrapped without any `[NgModule](../api/core/ngmodule)` by using a standalone component as the application's root component. This is done using the `[bootstrapApplication](../api/platform-browser/bootstrapapplication)` API: ``` // in the main.ts file import {bootstrapApplication} from '@angular/platform-browser'; import {PhotoAppComponent} from './app/photo.app.component'; bootstrapApplication(PhotoAppComponent); ``` ### Configuring dependency injection When bootstrapping an application, often you want to configure Angular’s dependency injection and provide configuration values or services for use throughout the application. 
You can pass these as providers to `[bootstrapApplication](../api/platform-browser/bootstrapapplication)`: ``` bootstrapApplication(PhotoAppComponent, { providers: [ {provide: BACKEND_URL, useValue: 'https://photoapp.looknongmodules.com/api'}, // ... ] }); ``` The standalone bootstrap operation is based on explicitly configuring a list of `[Provider](../api/core/provider)`s for dependency injection. In Angular, `provide`-prefixed functions can be used to configure different systems without needing to import NgModules. For example, `[provideRouter](../api/router/providerouter)` is used in place of `RouterModule.forRoot` to configure the router: ``` bootstrapApplication(PhotoAppComponent, { providers: [ {provide: BACKEND_URL, useValue: 'https://photoapp.looknongmodules.com/api'}, provideRouter([/* app routes */]), // ... ] }); ``` Many third party libraries have also been updated to support this `provide`-function configuration pattern. If a library only offers an NgModule API for its DI configuration, you can use the `[importProvidersFrom](../api/core/importprovidersfrom)` utility to still use it with `[bootstrapApplication](../api/platform-browser/bootstrapapplication)` and other standalone contexts: ``` import {LibraryModule} from 'ngmodule-based-library'; bootstrapApplication(PhotoAppComponent, { providers: [ {provide: BACKEND_URL, useValue: 'https://photoapp.looknongmodules.com/api'}, importProvidersFrom( LibraryModule.forRoot() ), ] }); ``` Routing and lazy-loading ------------------------ The router APIs were updated and simplified to take advantage of the standalone components: an `[NgModule](../api/core/ngmodule)` is no longer required in many common, lazy-loading scenarios. ### Lazy loading a standalone component Any route can lazily load its routed, standalone component by using `loadComponent`: ``` export const ROUTES: Route[] = [ {path: 'admin', loadComponent: () => import('./admin/panel.component').then(mod => mod.AdminPanelComponent)}, // ... ]; ``` This works as long as the loaded component is standalone. ### Lazy loading many routes at once The `loadChildren` operation now supports loading a new set of child `[Route](../api/router/route)`s without needing to write a lazy loaded `[NgModule](../api/core/ngmodule)` that imports `RouterModule.forChild` to declare the routes. This works when every route loaded this way is using a standalone component. ``` // In the main application: export const ROUTES: Route[] = [ {path: 'admin', loadChildren: () => import('./admin/routes').then(mod => mod.ADMIN_ROUTES)}, // ... ]; // In admin/routes.ts: export const ADMIN_ROUTES: Route[] = [ {path: 'home', component: AdminHomeComponent}, {path: 'users', component: AdminUsersComponent}, // ... ]; ``` ### Lazy loading and default exports When using `loadChildren` and `loadComponent`, the router understands and automatically unwraps dynamic `import()` calls with `default` exports. You can take advantage of this to skip the `.then()` for such lazy loading operations. ``` // In the main application: export const ROUTES: Route[] = [ {path: 'admin', loadChildren: () => import('./admin/routes')}, // ... ]; // In admin/routes.ts: export default [ {path: 'home', component: AdminHomeComponent}, {path: 'users', component: AdminUsersComponent}, // ... ] as Route[]; ``` ### Providing services to a subset of routes The lazy loading API for `[NgModule](../api/core/ngmodule)`s (`loadChildren`) creates a new "module" injector when it loads the lazily loaded children of a route. 
This feature was often useful to provide services only to a subset of routes in the application. For example, if all routes under `/admin` were scoped using a `loadChildren` boundary, then admin-only services could be provided only to those routes. Doing this required using the `loadChildren` API, even if lazy loading of the routes in question was unnecessary. The Router now supports explicitly specifying additional `providers` on a `[Route](../api/router/route)`, which allows this same scoping without the need for either lazy loading or `[NgModule](../api/core/ngmodule)`s. For example, scoped services within an `/admin` route structure would look like: ``` export const ROUTES: Route[] = [ { path: 'admin', providers: [ AdminService, {provide: ADMIN_API_KEY, useValue: '12345'}, ], children: [ {path: 'users', component: AdminUsersComponent}, {path: 'teams', component: AdminTeamsComponent}, ], }, // ... other application routes that don't // have access to ADMIN_API_KEY or AdminService. ]; ``` It's also possible to combine `providers` with `loadChildren` to lazily load additional routing configuration, achieving the same effect as lazy loading an `[NgModule](../api/core/ngmodule)` with additional routes and route-level providers. This example configures the same providers/child routes as above, but behind a lazy-loaded boundary: ``` // Main application: export const ROUTES: Route[] = [ // Lazy-load the admin routes. {path: 'admin', loadChildren: () => import('./admin/routes').then(mod => mod.ADMIN_ROUTES)}, // ... rest of the routes ]; // In admin/routes.ts: export const ADMIN_ROUTES: Route[] = [{ path: '', pathMatch: 'prefix', providers: [ AdminService, {provide: ADMIN_API_KEY, useValue: 12345}, ], children: [ {path: 'users', component: AdminUsersCmp}, {path: 'teams', component: AdminTeamsCmp}, ], }]; ``` Note the use of an empty-path route to host `providers` that are shared among all the child routes. `[importProvidersFrom](../api/core/importprovidersfrom)` can be used to import existing NgModule-based DI configuration into route `providers` as well. Advanced topics --------------- This section goes into more details that are relevant only to more advanced usage patterns. You can safely skip this section when learning about standalone components, directives, and pipes for the first time. ### Standalone components for library authors Standalone components, directives, and pipes can be exported from `[NgModule](../api/core/ngmodule)`s that import them: ``` @NgModule({ imports: [ImageCarouselComponent, ImageSlideComponent], exports: [ImageCarouselComponent, ImageSlideComponent], }) export class CarouselModule {} ``` This pattern is useful for Angular libraries that publish a set of cooperating directives. In the above example, both the `ImageCarouselComponent` and `ImageSlideComponent` need to be present in a template to build up one logical "carousel widget". As an alternative to publishing a `[NgModule](../api/core/ngmodule)`, library authors might want to export an array of cooperating directives: ``` export const CAROUSEL_DIRECTIVES = [ImageCarouselComponent, ImageSlideComponent] as const; ``` Such an array could be imported by applications using `[NgModule](../api/core/ngmodule)`s and added to the `@[NgModule.imports](../api/core/ngmodule#imports)`.
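For illustration, an application that still uses `NgModule`s might add such an array to its module as in the following minimal sketch (the `AppModule`, `AppComponent`, and the `'image-carousel-lib'` import path are assumed names, not part of the guide's sample code):

```
import {NgModule} from '@angular/core';
import {BrowserModule} from '@angular/platform-browser';

// Assumed library entry point that exports the cooperating directives array.
import {CAROUSEL_DIRECTIVES} from 'image-carousel-lib';

import {AppComponent} from './app.component';

@NgModule({
  // Spreading the readonly array keeps the `imports` entry a plain, mutable array.
  imports: [BrowserModule, ...CAROUSEL_DIRECTIVES],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
})
export class AppModule {}
```

A standalone component could likewise spread the array into its own `imports`.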
Please note the presence of TypeScript’s `as const` construct: it gives the Angular compiler additional information required for proper compilation and is a recommended practice (as it makes the exported array immutable from the TypeScript point of view). ### Dependency injection and injectors hierarchy Angular applications can configure dependency injection by specifying a set of available providers. In a typical application, there are two different injector types: * **module injector** with providers configured in `@[NgModule.providers](../api/core/ngmodule#providers)` or `@[Injectable](../api/core/injectable)({providedIn: "..."})`. Those application-wide providers are visible to all components in the application, as well as to other services configured in a module injector. * **node injectors** configured in `@[Directive.providers](../api/core/directive#providers)` / `@Component.providers` or `@[Component.viewProviders](../api/core/component#viewProviders)`. Those providers are visible to a given component and all its children only. #### Environment injectors Making `[NgModule](../api/core/ngmodule)`s optional will require new ways of configuring "module" injectors with application-wide providers (for example, [HttpClient](../api/common/http/httpclient)). In the standalone application (one created with `[bootstrapApplication](../api/platform-browser/bootstrapapplication)`), “module” providers can be configured during the bootstrap process, in the `providers` option: ``` bootstrapApplication(PhotoAppComponent, { providers: [ {provide: BACKEND_URL, useValue: 'https://photoapp.looknongmodules.com/api'}, {provide: PhotosService, useClass: PhotosService}, // ... ] }); ``` The new bootstrap API gives us back the means of configuring “module injectors” without using `[NgModule](../api/core/ngmodule)`s. In this sense, the “module” part of the name is no longer relevant and we’ve decided to introduce a new term: “environment injectors”. Environment injectors can be configured using one of the following: * `@[NgModule.providers](../api/core/ngmodule#providers)` (in applications bootstrapped through an `[NgModule](../api/core/ngmodule)`); * `@[Injectable](../api/core/injectable)({providedIn: "..."})` (in both the NgModule-based and the “standalone” applications); * `providers` option in the `[bootstrapApplication](../api/platform-browser/bootstrapapplication)` call (in fully “standalone” applications); * `providers` field in a `[Route](../api/router/route)` configuration. Angular v14 introduces a new TypeScript type `[EnvironmentInjector](../api/core/environmentinjector)` to represent this new naming.
The accompanying `[createEnvironmentInjector](../api/core/createenvironmentinjector)` API makes it possible to create environment injectors programmatically: ``` import {createEnvironmentInjector} from '@angular/core'; const parentInjector = … // existing environment injector const childInjector = createEnvironmentInjector([{provide: PhotosService, useClass: CustomPhotosService}], parentInjector); ``` Environment injectors have one additional capability: they can execute initialization logic when an environment injector gets created (similar to the `[NgModule](../api/core/ngmodule)` constructors that get executed when a module injector is created): ``` import {createEnvironmentInjector, ENVIRONMENT_INITIALIZER} from '@angular/core'; createEnvironmentInjector([ {provide: PhotosService, useClass: CustomPhotosService}, {provide: ENVIRONMENT_INITIALIZER, useValue: () => { console.log("This function runs when this EnvironmentInjector gets created"); }} ]); ``` #### Standalone injectors In reality, the dependency injector hierarchy is slightly more elaborate in applications using standalone components. Let’s consider the following example: ``` // an existing "datepicker" component with an NgModule @Component({ selector: 'datepicker', template: '...', }) class DatePickerComponent { constructor(private calendar: CalendarService) {} } @NgModule({ declarations: [DatePickerComponent], exports: [DatePickerComponent], providers: [CalendarService], }) class DatePickerModule { } @Component({ selector: 'date-modal', template: '<datepicker></datepicker>', standalone: true, imports: [DatePickerModule] }) class DateModalComponent { } ``` In the above example, the component `DateModalComponent` is standalone - it can be consumed directly and has no NgModule which needs to be imported in order to use it. However, `DateModalComponent` has a dependency, the `DatePickerComponent`, which is imported via its NgModule (the `DatePickerModule`). This NgModule may declare providers (in this case: `CalendarService`) which are required for the `DatePickerComponent` to function correctly. When Angular creates a standalone component, it needs to know that the current injector has all of the necessary services for the standalone component's dependencies, including those based on NgModules. To guarantee that, in some cases Angular will create a new "standalone injector" as a child of the current environment injector. Today, this happens for all bootstrapped standalone components: it will be a child of the root environment injector. The same rule applies to standalone components created dynamically (for example, by the router or the `[ViewContainerRef](../api/core/viewcontainerref)` API). A separate standalone injector is created to ensure that providers imported by a standalone component are “isolated” from the rest of the application. This lets us think of standalone components as truly self-contained pieces that can’t “leak” their implementation details to the rest of the application.
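To make the dynamically created case concrete, a minimal sketch might look like the following (the `ModalHostComponent` name, its template, and the import path for `DateModalComponent` are assumptions for illustration):

```
import {Component, ViewChild, ViewContainerRef} from '@angular/core';
import {DateModalComponent} from './date-modal.component'; // assumed path

@Component({
  standalone: true,
  selector: 'modal-host',
  template: '<ng-container #outlet></ng-container>',
})
export class ModalHostComponent {
  @ViewChild('outlet', {read: ViewContainerRef}) outlet!: ViewContainerRef;

  open() {
    // createComponent() accepts the standalone component type directly.
    // Angular creates a standalone injector behind the scenes so that providers
    // coming from DatePickerModule (such as CalendarService) stay isolated.
    this.outlet.createComponent(DateModalComponent);
  }
}
```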
angular Template type checking Template type checking ====================== Overview of template type checking ---------------------------------- Just as TypeScript catches type errors in your code, Angular checks the expressions and bindings within the templates of your application and can report any type errors it finds. Angular currently has three modes of doing this, depending on the value of the `fullTemplateTypeCheck` and `strictTemplates` flags in the [TypeScript configuration file](typescript-configuration). ### Basic mode In the most basic type-checking mode, with the `fullTemplateTypeCheck` flag set to `false`, Angular validates only top-level expressions in a template. If you write `<map [city]="user.address.city">`, the compiler verifies the following: * `user` is a property on the component class * `user` is an object with an address property * `user.address` is an object with a city property The compiler does not verify that the value of `user.address.city` is assignable to the city input of the `<map>` component. The compiler also has some major limitations in this mode: * Importantly, it doesn't check embedded views, such as `*[ngIf](../api/common/ngif)`, `*[ngFor](../api/common/ngfor)`, other `[<ng-template>](../api/core/ng-template)` embedded view. * It doesn't figure out the types of `#refs`, the results of pipes, or the type of `$event` in event bindings. In many cases, these things end up as type `any`, which can cause subsequent parts of the expression to go unchecked. ### Full mode If the `fullTemplateTypeCheck` flag is set to `true`, Angular is more aggressive in its type-checking within templates. In particular: * Embedded views (such as those within an `*[ngIf](../api/common/ngif)` or `*[ngFor](../api/common/ngfor)`) are checked * Pipes have the correct return type * Local references to directives and pipes have the correct type (except for any generic parameters, which will be `any`) The following still have type `any`. * Local references to DOM elements * The `$event` object * Safe navigation expressions > The `fullTemplateTypeCheck` flag has been deprecated in Angular 13. The `strictTemplates` family of compiler options should be used instead. > > ### Strict mode Angular maintains the behavior of the `fullTemplateTypeCheck` flag, and introduces a third "strict mode". Strict mode is a superset of full mode, and is accessed by setting the `strictTemplates` flag to true. This flag supersedes the `fullTemplateTypeCheck` flag. In strict mode, Angular uses checks that go beyond the version 8 type-checker. > **NOTE**: Strict mode is only available if using Ivy. > > In addition to the full mode behavior, Angular does the following: * Verifies that component/directive bindings are assignable to their `@[Input](../api/core/input)()`s * Obeys TypeScript's `strictNullChecks` flag when validating the preceding mode * Infers the correct type of components/directives, including generics * Infers template context types where configured (for example, allowing correct type-checking of `[NgFor](../api/common/ngfor)`) * Infers the correct type of `$event` in component/directive, DOM, and animation event bindings * Infers the correct type of local references to DOM elements, based on the tag name (for example, the type that `document.createElement` would return for that tag) Checking of `*[ngFor](../api/common/ngfor)` ------------------------------------------- The three modes of type-checking treat embedded views differently. Consider the following example. 
``` interface User { name: string; address: { city: string; state: string; } } ``` ``` <div *ngFor="let user of users"> <h2>{{config.title}}</h2> <span>City: {{user.address.city}}</span> </div> ``` The `<h2>` and the `<span>` are in the `*[ngFor](../api/common/ngfor)` embedded view. In basic mode, Angular doesn't check either of them. However, in full mode, Angular checks that `config` and `user` exist and assumes a type of `any`. In strict mode, Angular knows that the `user` in the `<span>` has a type of `User`, and that `address` is an object with a `city` property of type `string`. Troubleshooting template errors ------------------------------- With strict mode, you might encounter template errors that didn't arise in either of the previous modes. These errors often represent genuine type mismatches in the templates that were not caught by the previous tooling. If this is the case, the error message should make it clear where in the template the problem occurs. There can also be false positives when the typings of an Angular library are either incomplete or incorrect, or when the typings don't quite line up with expectations as in the following cases. * When a library's typings are wrong or incomplete (for example, missing `null | undefined` if the library was not written with `strictNullChecks` in mind) * When a library's input types are too narrow and the library hasn't added appropriate metadata for Angular to figure this out. This usually occurs with disabled or other common Boolean inputs used as attributes, for example, `<input disabled>`. * When using `$event.target` for DOM events (because of the possibility of event bubbling, `$event.target` in the DOM typings doesn't have the type you might expect) In case of a false positive like these, there are a few options: * Use the [`$any()` type-cast function](template-expression-operators#any-type-cast-function) in certain contexts to opt out of type-checking for a part of the expression * Disable strict checks entirely by setting `strictTemplates: false` in the application's TypeScript configuration file, `tsconfig.json` * Disable certain type-checking operations individually, while maintaining strictness in other aspects, by setting a *strictness flag* to `false` * If you want to use `strictTemplates` and `strictNullChecks` together, opt out of strict null type checking specifically for input bindings using `strictNullInputTypes` Unless otherwise commented, each following option is set to the value for `strictTemplates` (`true` when `strictTemplates` is `true` and conversely, the other way around). | Strictness flag | Effect | | --- | --- | | `strictInputTypes` | Whether the assignability of a binding expression to the `@[Input](../api/core/input)()` field is checked. Also affects the inference of directive generic types. | | `strictInputAccessModifiers` | Whether access modifiers such as `private`/`protected`/`readonly` are honored when assigning a binding expression to an `@[Input](../api/core/input)()`. If disabled, the access modifiers of the `@[Input](../api/core/input)` are ignored; only the type is checked. This option is `false` by default, even with `strictTemplates` set to `true`. | | `strictNullInputTypes` | Whether `strictNullChecks` is honored when checking `@[Input](../api/core/input)()` bindings (per `strictInputTypes`). Turning this off can be useful when using a library that was not built with `strictNullChecks` in mind. 
| | `strictAttributeTypes` | Whether to check `@[Input](../api/core/input)()` bindings that are made using text attributes. For example, ``` <input matInput disabled="true"> ``` (setting the `disabled` property to the string `'true'`) vs ``` <input matInput [disabled]="true"> ``` (setting the `disabled` property to the boolean `true`). | | `strictSafeNavigationTypes` | Whether the return type of safe navigation operations (for example, `user?.name` will be correctly inferred based on the type of `user`). If disabled, `user?.name` will be of type `any`. | | `strictDomLocalRefTypes` | Whether local references to DOM elements will have the correct type. If disabled `ref` will be of type `any` for `<input #ref>`. | | `strictOutputEventTypes` | Whether `$event` will have the correct type for event bindings to component/directive an `@[Output](../api/core/output)()`, or to animation events. If disabled, it will be `any`. | | `strictDomEventTypes` | Whether `$event` will have the correct type for event bindings to DOM events. If disabled, it will be `any`. | | `strictContextGenerics` | Whether the type parameters of generic components will be inferred correctly (including any generic bounds). If disabled, any type parameters will be `any`. | | `strictLiteralTypes` | Whether object and array literals declared in the template will have their type inferred. If disabled, the type of such literals will be `any`. This flag is `true` when *either* `fullTemplateTypeCheck` or `strictTemplates` is set to `true`. | If you still have issues after troubleshooting with these flags, fall back to full mode by disabling `strictTemplates`. If that doesn't work, an option of last resort is to turn off full mode entirely with `fullTemplateTypeCheck: false`. A type-checking error that you cannot resolve with any of the recommended methods can be the result of a bug in the template type-checker itself. If you get errors that require falling back to basic mode, it is likely to be such a bug. If this happens, [file an issue](https://github.com/angular/angular/issues) so the team can address it. Inputs and type-checking ------------------------ The template type checker checks whether a binding expression's type is compatible with that of the corresponding directive input. As an example, consider the following component: ``` export interface User { name: string; } @Component({ selector: 'user-detail', template: '{{ user.name }}', }) export class UserDetailComponent { @Input() user: User; } ``` The `AppComponent` template uses this component as follows: ``` @Component({ selector: 'app-root', template: '<user-detail [user]="selectedUser"></user-detail>', }) export class AppComponent { selectedUser: User | null = null; } ``` Here, during type checking of the template for `AppComponent`, the `[user]="selectedUser"` binding corresponds with the `UserDetailComponent.user` input. Therefore, Angular assigns the `selectedUser` property to `UserDetailComponent.user`, which would result in an error if their types were incompatible. TypeScript checks the assignment according to its type system, obeying flags such as `strictNullChecks` as they are configured in the application. Avoid run-time type errors by providing more specific in-template type requirements to the template type checker. Make the input type requirements for your own directives as specific as possible by providing template-guard functions in the directive definition. 
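As a hedged sketch of what such a guard can look like (the directive name and context shape below are illustrative, not taken from this guide's sample code):

```
import {Directive, Input, TemplateRef, ViewContainerRef} from '@angular/core';

// Context made available to the embedded view that the directive creates.
export interface LoadedContext<T> {
  $implicit: T;
}

@Directive({selector: '[appWhenLoaded]'})
export class WhenLoadedDirective<T> {
  @Input('appWhenLoaded') value: T | null = null;

  constructor(
    readonly templateRef: TemplateRef<LoadedContext<T>>,
    readonly viewContainer: ViewContainerRef,
  ) {}
  // Rendering logic (createEmbeddedView/clear) is omitted for brevity.

  // Tells the template type checker which type the embedded view's context has,
  // so variables bound through this directive are typed as T instead of any.
  static ngTemplateContextGuard<T>(
    dir: WhenLoadedDirective<T>,
    ctx: unknown,
  ): ctx is LoadedContext<T> {
    return true;
  }
}
```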
See [Improving template type checking for custom directives](structural-directives#directive-type-checks) in this guide. ### Strict null checks When you enable `strictTemplates` and the TypeScript flag `strictNullChecks`, typecheck errors might occur for certain situations that might not easily be avoided. For example: * A nullable value that is bound to a directive from a library which did not have `strictNullChecks` enabled. For a library compiled without `strictNullChecks`, its declaration files will not indicate whether a field can be `null` or not. For situations where the library handles `null` correctly, this is problematic, as the compiler will check a nullable value against the declaration files which omit the `null` type. As such, the compiler produces a type-check error because it adheres to `strictNullChecks`. * Using the `[async](../api/common/asyncpipe)` pipe with an Observable which you know will emit synchronously. The `[async](../api/common/asyncpipe)` pipe currently assumes that the Observable it subscribes to can be asynchronous, which means that it's possible that there is no value available yet. In that case, it still has to return something —which is `null`. In other words, the return type of the `[async](../api/common/asyncpipe)` pipe includes `null`, which might result in errors in situations where the Observable is known to emit a non-nullable value synchronously. There are two potential workarounds to the preceding issues: * In the template, include the non-null assertion operator `!` at the end of a nullable expression, such as ``` <user-detail [user]="user!"></user-detail> ``` In this example, the compiler disregards type incompatibilities in nullability, just as in TypeScript code. In the case of the `[async](../api/common/asyncpipe)` pipe, notice that the expression needs to be wrapped in parentheses, as in ``` <user-detail [user]="(user$ | async)!"></user-detail> ``` * Disable strict null checks in Angular templates completely. When `strictTemplates` is enabled, it is still possible to disable certain aspects of type checking. Setting the option `strictNullInputTypes` to `false` disables strict null checks within Angular templates. This flag applies for all components that are part of the application. ### Advice for library authors As a library author, you can take several measures to provide an optimal experience for your users. First, enabling `strictNullChecks` and including `null` in an input's type, as appropriate, communicates to your consumers whether they can provide a nullable value or not. Additionally, it is possible to provide type hints that are specific to the template type checker. See [Improving template type checking for custom directives](structural-directives#directive-type-checks), and [Input setter coercion](template-typecheck#input-setter-coercion). Input setter coercion --------------------- Occasionally it is desirable for the `@[Input](../api/core/input)()` of a directive or component to alter the value bound to it, typically using a getter/setter pair for the input. 
As an example, consider the following custom button component: ``` @Component({ selector: 'submit-button', template: ` <div class="wrapper"> <button [disabled]="disabled">Submit</button> </div> `, }) class SubmitButton { private _disabled: boolean; @Input() get disabled(): boolean { return this._disabled; } set disabled(value: boolean) { this._disabled = value; } } ``` Here, the `disabled` input of the component is being passed on to the `<button>` in the template. All of this works as expected, as long as a `boolean` value is bound to the input. But, suppose a consumer uses this input in the template as an attribute: ``` <submit-button disabled></submit-button> ``` This has the same effect as the binding: ``` <submit-button [disabled]="''"></submit-button> ``` At runtime, the input will be set to the empty string, which is not a `boolean` value. Angular component libraries that deal with this problem often "coerce" the value into the right type in the setter: ``` set disabled(value: boolean) { this._disabled = (value === '') || value; } ``` It would be ideal to change the type of `value` here, from `boolean` to `boolean|''`, to match the set of values which are actually accepted by the setter. TypeScript prior to version 4.3 requires that both the getter and setter have the same type, so if the getter should return a `boolean` then the setter is stuck with the narrower type. If the consumer has Angular's strictest type checking for templates enabled, this creates a problem: the empty string (`''`) is not actually assignable to the `disabled` field, which creates a type error when the attribute form is used. As a workaround for this problem, Angular supports checking a wider, more permissive type for `@[Input](../api/core/input)()` than is declared for the input field itself. Enable this by adding a static property with the `ngAcceptInputType_` prefix to the component class: ``` class SubmitButton { private _disabled: boolean; @Input() get disabled(): boolean { return this._disabled; } set disabled(value: boolean) { this._disabled = (value === '') || value; } static ngAcceptInputType_disabled: boolean|''; } ``` > Since TypeScript 4.3, the setter could have been declared to accept `boolean|''` as its type, making the input setter coercion field obsolete. As such, input setter coercion fields have been deprecated. > > This field does not need to have a value. Its existence communicates to the Angular type checker that the `disabled` input should be considered as accepting bindings that match the type `boolean|''`. The suffix should be the `@[Input](../api/core/input)` *field* name. Care should be taken that if an `ngAcceptInputType_` override is present for a given input, then the setter should be able to handle any values of the overridden type. Disabling type checking using `$any()` -------------------------------------- Disable checking of a binding expression by surrounding the expression in a call to the [`$any()` cast pseudo-function](template-expression-operators). The compiler treats it as a cast to the `any` type just like in TypeScript when a `<any>` or `as any` cast is used. In the following example, casting `person` to the `any` type suppresses the error `Property address does not exist`.
``` @Component({ selector: 'my-component', template: '{{$any(person).address.street}}' }) class MyComponent { person?: Person; } ``` Last reviewed on Mon Feb 28 2022 angular Dynamic component loader Dynamic component loader ======================== Component templates are not always fixed. An application might need to load new components at runtime. This cookbook shows you how to add components dynamically. Dynamic component loading ------------------------- The following example shows how to build a dynamic ad banner. The hero agency is planning an ad campaign with several different ads cycling through the banner. New ad components are added frequently by several different teams. This makes it impractical to use a template with a static component structure. Instead, you need a way to load a new component without a fixed reference to the component in the ad banner's template. Angular comes with its own API for loading components dynamically. The anchor directive -------------------- Before adding components, you have to define an anchor point to tell Angular where to insert components. The ad banner uses a helper directive called `AdDirective` to mark valid insertion points in the template. ``` import { Directive, ViewContainerRef } from '@angular/core'; @Directive({ selector: '[adHost]', }) export class AdDirective { constructor(public viewContainerRef: ViewContainerRef) { } } ``` `AdDirective` injects `[ViewContainerRef](../api/core/viewcontainerref)` to gain access to the view container of the element that will host the dynamically added component. In the `@[Directive](../api/core/directive)` decorator, notice the selector name, `adHost`; that's what you use to apply the directive to the element. The next section shows you how. Loading components ------------------ Most of the ad banner implementation is in `ad-banner.component.ts`. To keep things simple in this example, the HTML is in the `@[Component](../api/core/component)` decorator's `template` property as a template string. The `[<ng-template>](../api/core/ng-template)` element is where you apply the directive you just made. To apply the `AdDirective`, recall the selector from `ad.directive.ts`, `[adHost]`. Apply that to `[<ng-template>](../api/core/ng-template)` without the square brackets. Now Angular knows where to dynamically load components. ``` template: ` <div class="ad-banner-example"> <h3>Advertisements</h3> <ng-template adHost></ng-template> </div> ` ``` The `[<ng-template>](../api/core/ng-template)` element is a good choice for dynamic components because it doesn't render any additional output. Resolving components -------------------- Take a closer look at the methods in `ad-banner.component.ts`. `AdBannerComponent` takes an array of `AdItem` objects as input, which ultimately comes from `AdService`. `AdItem` objects specify the type of component to load and any data to bind to the component. `AdService` returns the actual ads making up the ad campaign. Passing an array of components to `AdBannerComponent` allows for a dynamic list of ads without static elements in the template. With its `getAds()` method, `AdBannerComponent` cycles through the array of `AdItems` and loads a new component every 3 seconds by calling `loadComponent()`. 
``` export class AdBannerComponent implements OnInit, OnDestroy { @Input() ads: AdItem[] = []; currentAdIndex = -1; @ViewChild(AdDirective, {static: true}) adHost!: AdDirective; interval: number|undefined; ngOnInit(): void { this.loadComponent(); this.getAds(); } ngOnDestroy() { clearInterval(this.interval); } loadComponent() { this.currentAdIndex = (this.currentAdIndex + 1) % this.ads.length; const adItem = this.ads[this.currentAdIndex]; const viewContainerRef = this.adHost.viewContainerRef; viewContainerRef.clear(); const componentRef = viewContainerRef.createComponent<AdComponent>(adItem.component); componentRef.instance.data = adItem.data; } getAds() { this.interval = setInterval(() => { this.loadComponent(); }, 3000); } } ``` The `loadComponent()` method is doing a lot of the heavy lifting here. Take it step by step. First, it picks an ad. > **How `loadComponent()` chooses an ad** > > The `loadComponent()` method chooses an ad using some math. > > First, it sets the `currentAdIndex` by taking whatever it currently is plus one, dividing that by the length of the `AdItem` array, and using the *remainder* as the new `currentAdIndex` value. Then, it uses that value to select an `adItem` from the array. > > Next, you're targeting the `viewContainerRef` that exists on this specific instance of the component. How do you know it's this specific instance? Because it's referring to `adHost`, and `adHost` is the directive you set up earlier to tell Angular where to insert dynamic components. As you may recall, `AdDirective` injects `[ViewContainerRef](../api/core/viewcontainerref)` into its constructor. This is how the directive accesses the element that you want to use to host the dynamic component. To add the component to the template, you call `[createComponent](../api/core/createcomponent)()` on `[ViewContainerRef](../api/core/viewcontainerref)`. The `[createComponent](../api/core/createcomponent)()` method returns a reference to the loaded component. Use that reference to interact with the component by assigning to its properties or calling its methods. The `AdComponent` interface --------------------------- In the ad banner, all components implement a common `AdComponent` interface to standardize the API for passing data to the components. Here are two sample components and the `AdComponent` interface for reference: ``` import { Component, Input } from '@angular/core'; import { AdComponent } from './ad.component'; @Component({ template: ` <div class="job-ad"> <h4>{{data.headline}}</h4> {{data.body}} </div> ` }) export class HeroJobAdComponent implements AdComponent { @Input() data: any; } ``` ``` import { Component, Input } from '@angular/core'; import { AdComponent } from './ad.component'; @Component({ template: ` <div class="hero-profile"> <h3>Featured Hero Profile</h3> <h4>{{data.name}}</h4> <p>{{data.bio}}</p> <strong>Hire this hero today!</strong> </div> ` }) export class HeroProfileComponent implements AdComponent { @Input() data: any; } ``` ``` export interface AdComponent { data: any; } ``` Final ad banner --------------- The final ad banner looks like this: ![Ads](https://angular.io/generated/images/guide/dynamic-component-loader/ads-example.gif) Last reviewed on Mon Feb 28 2022
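The cookbook above refers to `AdItem` and `AdService` without showing them in this excerpt; a minimal sketch consistent with that usage might look like the following (the hero component import paths, the `providedIn: 'root'` registration, and the sample ad data are assumptions):

```
import {Injectable, Type} from '@angular/core';

import {AdComponent} from './ad.component';
import {HeroJobAdComponent} from './hero-job-ad.component';    // assumed path
import {HeroProfileComponent} from './hero-profile.component'; // assumed path

// Pairs the component type to load with the data the banner binds to it.
export class AdItem {
  constructor(public component: Type<AdComponent>, public data: any) {}
}

@Injectable({providedIn: 'root'})
export class AdService {
  getAds(): AdItem[] {
    return [
      new AdItem(HeroProfileComponent, {name: 'Bombasto', bio: 'Brave as they come'}),
      new AdItem(HeroJobAdComponent, {headline: 'Hiring graduates', body: 'Apply today!'}),
    ];
  }
}
```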
angular Setup for upgrading from AngularJS Setup for upgrading from AngularJS ================================== > **AUDIENCE**: Use this guide **only** in the context of [Upgrading from AngularJS](upgrade "Upgrading from AngularJS to Angular") or [Upgrading for Performance](upgrade-performance "Upgrading for Performance"). Those Upgrade guides refer to this Setup guide for information about using the [deprecated QuickStart GitHub repository](https://github.com/angular/quickstart "Deprecated Angular QuickStart GitHub repository"), which was created prior to the current Angular [CLI](cli "CLI Overview"). > > **For all other scenarios**, see the current instructions in [Setting up the Local Environment and Workspace](setup-local "Setting up for Local Development"). > > This guide describes how to develop locally on your own machine. Setting up a new project on your machine is quick and easy with the [QuickStart seed on GitHub](https://github.com/angular/quickstart "Install the github QuickStart repo"). Prerequisites ------------- Make sure you have [Node.js® and npm installed](setup-local#prerequisites "Angular prerequisites"). Clone ----- Perform the *clone-to-launch* steps with these terminal commands. ``` git clone https://github.com/angular/quickstart.git quickstart cd quickstart npm install ``` Download -------- [Download the QuickStart seed](https://github.com/angular/quickstart/archive/master.zip "Download the QuickStart seed repository") and unzip it into your project folder. Then perform the remaining steps with these terminal commands. ``` cd quickstart npm install ``` Delete `non-essential` files (optional) --------------------------------------- You can quickly delete the *non-essential* files that concern testing and QuickStart repository maintenance (***including all git-related artifacts*** such as the `.git` folder and `.gitignore`). > Do this only in the beginning to avoid accidentally deleting your own tests and git setup. > > Open a terminal window in the project folder and enter the following commands for your environment: ### macOS / Mac OS X (bash) ``` xargs rm -rf < non-essential-files.osx.txt rm src/app/*.spec*.ts rm non-essential-files.osx.txt ``` ### Windows ``` for /f %i in (non-essential-files.txt) do del %i /F /S /Q rd .git /s /q rd e2e /s /q ``` Update dependency versions -------------------------- Since the quickstart repository is deprecated, it is no longer updated and you need some additional steps to use the latest Angular. 1. Remove the obsolete `@angular/[http](../api/common/http)` package (both from `package.json > dependencies` and `src/systemjs.config.js > SystemJS.config() > map`). 2. Install the latest versions of the Angular framework packages by running: ``` npm install --save @angular/common@latest @angular/compiler@latest @angular/core@latest @angular/forms@latest @angular/platform-browser@latest @angular/platform-browser-dynamic@latest @angular/router@latest ``` 3. Install the latest versions of other packages used by Angular (RxJS, TypeScript, Zone.js) by running: ``` npm install --save rxjs@latest zone.js@latest npm install --save-dev typescript@latest ``` 4. Install the `systemjs-plugin-babel` package. This will later be used to load the Angular framework files, which are in ES2015 format, using SystemJS. ``` npm install --save systemjs-plugin-babel@latest ``` 5. In order to be able to load the latest Angular framework packages (in ES2015 format) correctly, replace the relevant entries in `src/systemjs.config.js`: ``` System.config({ /* . . . 
*/ map: { /* . . . */ '@angular/core': 'npm:@angular/core/fesm2015/core.mjs', '@angular/common': 'npm:@angular/common/fesm2015/common.mjs', '@angular/common/http': 'npm:@angular/common/fesm2015/http.mjs', '@angular/compiler': 'npm:@angular/compiler/fesm2015/compiler.mjs', '@angular/platform-browser': 'npm:@angular/platform-browser/fesm2015/platform-browser.mjs', '@angular/platform-browser-dynamic': 'npm:@angular/platform-browser-dynamic/fesm2015/platform-browser-dynamic.mjs', '@angular/router': 'npm:@angular/router/fesm2015/router.mjs', '@angular/router/upgrade': 'npm:@angular/router/fesm2015/upgrade.mjs', '@angular/forms': 'npm:@angular/forms/fesm2015/forms.mjs', /* . . . */ }, /* . . . */ }); ``` 6. In order to be able to load the latest RxJS package correctly, replace the relevant entries in `src/systemjs.config.js`: ``` System.config({ /* . . . */ map: { /* . . . */ 'rxjs': 'npm:rxjs/dist/cjs', 'rxjs/operators': 'npm:rxjs/dist/cjs/operators', /* . . . */ }, /* . . . */ packages: { /* . . . */ 'rxjs': { defaultExtension: 'js', format: 'cjs', main: 'index.js' }, 'rxjs/operators': { defaultExtension: 'js', format: 'cjs', main: 'index.js' }, /* . . . */ } }); ``` 7. In order to be able to load the `tslib` package (which is required for files transpiled by TypeScript), add the following entry to `src/systemjs.config.js`: ``` System.config({ /* . . . */ map: { /* . . . */ 'tslib': 'npm:tslib/tslib.js', /* . . . */ }, /* . . . */ }); ``` 8. In order for SystemJS to be able to load the ES2015 Angular files correctly, add the following entries to `src/systemjs.config.js`: ``` System.config({ /* . . . */ map: { /* . . . */ 'plugin-babel': 'npm:systemjs-plugin-babel/plugin-babel.js', 'systemjs-babel-build': 'npm:systemjs-plugin-babel/systemjs-babel-browser.js' }, transpiler: 'plugin-babel', /* . . . */ packages: { /* . . . */ 'meta': { '*.mjs': { babelOptions: { es2015: false } } } } }); ``` 9. Finally, in order to prevent TypeScript typecheck errors for dependencies, add the following entry to `src/tsconfig.json`: ``` { "compilerOptions": { "skipLibCheck": true, // … } } ``` With that, you can now run `npm start` and have the application built and served. Once built, the application will be automatically opened in a new browser tab and it will be automatically reloaded when you make changes to the source code. What's in the QuickStart seed? ------------------------------ The **QuickStart seed** provides a basic QuickStart playground application and other files necessary for local development. Consequently, there are many files in the project folder on your machine, most of which you can [learn about later](file-structure). > **Reminder:** The "QuickStart seed" example was created prior to the Angular CLI, so there are some differences between what is described here and an Angular CLI application. > > Focus on the following three TypeScript (`.ts`) files in the `/src` folder. 
``` src app app.component.ts app.module.ts main.ts ``` ``` import { Component } from '@angular/core'; @Component({ selector: 'app-root', template: '<h1>Hello {{name}}</h1>' }) export class AppComponent { name = 'Angular'; } ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; @NgModule({ imports: [ BrowserModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; platformBrowserDynamic().bootstrapModule(AppModule) .catch(err => console.error(err)); ``` All guides and cookbooks have *at least these core files*. Each file has a distinct purpose and evolves independently as the application grows. Files outside `src/` concern building, deploying, and testing your application. They include configuration files and external dependencies. Files inside `src/` "belong" to your application. Add new Typescript, HTML and CSS files inside the `src/` directory, most of them inside `src/app`, unless told to do otherwise. The following are all in `src/` | File | Purpose | | --- | --- | | app/app.component.ts | Defines the same `AppComponent` as the one in the QuickStart playground. It is the **root** component of what will become a tree of nested components as the application evolves. | | app/app.module.ts | Defines `AppModule`, the [root module](bootstrapping "AppModule: the root module") that tells Angular how to assemble the application. When initially created, it declares only the `AppComponent`. Over time, you add more components to declare. | | main.ts | Compiles the application with the [JIT compiler](glossary#jit) and [bootstraps](bootstrapping) the application's main module (`AppModule`) to run in the browser. The JIT compiler is a reasonable choice during the development of most projects and it's the only viable choice for a sample running in a *live-coding* environment such as Stackblitz. Alternative [compilation](aot-compiler), <build>, and <deployment> options are available. | Appendix: Test using `[fakeAsync](../api/core/testing/fakeasync)()/[waitForAsync](../api/core/testing/waitforasync)()` ---------------------------------------------------------------------------------------------------------------------- If you use the `[fakeAsync](../api/core/testing/fakeasync)()` or `[waitForAsync](../api/core/testing/waitforasync)()` helper functions to run unit tests (for details, read the [Testing guide](testing-components-scenarios#fake-async)), you need to import `zone.js/testing` in your test setup file. > If you create project with `Angular/CLI`, it is already imported in `src/test.ts`. > > And in the earlier versions of `Angular`, the following files were imported or added in your html file: ``` import 'zone.js/plugins/long-stack-trace-zone'; import 'zone.js/plugins/proxy'; import 'zone.js/plugins/sync-test'; import 'zone.js/plugins/jasmine-patch'; import 'zone.js/plugins/async-test'; import 'zone.js/plugins/fake-async-test'; ``` You can still load those files separately, but the order is important, you must import `proxy` before `sync-test`, `async-test`, `fake-async-test` and `jasmine-patch`. And you also need to import `sync-test` before `jasmine-patch`, so it is recommended to just import `zone-testing` instead of loading those separated files. 
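For comparison, a modern test setup file that relies on the single `zone.js/testing` entry point typically looks like this minimal sketch (the `src/test.ts` location is the CLI convention; adjust it to your project layout):

```
// src/test.ts
import 'zone.js/testing';

import { getTestBed } from '@angular/core/testing';
import {
  BrowserDynamicTestingModule,
  platformBrowserDynamicTesting,
} from '@angular/platform-browser-dynamic/testing';

// Initialize the Angular testing environment once for the whole test run.
getTestBed().initTestEnvironment(
  BrowserDynamicTestingModule,
  platformBrowserDynamicTesting(),
);
```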
Last reviewed on Mon Feb 28 2022

Angular coding style guide
==========================

Looking for an opinionated guide to Angular syntax, conventions, and application structure? Step right in. This style guide presents preferred conventions and, as importantly, explains why.

Style vocabulary
----------------

Each guideline describes either a good or bad practice, and all have a consistent presentation. The wording of each guideline indicates how strong the recommendation is.

A **Do** guideline is one that should always be followed. *Always* might be a bit too strong of a word. Guidelines that literally should always be followed are extremely rare. On the other hand, you need a really unusual case for breaking a *Do* guideline.

**Consider** guidelines should generally be followed. If you fully understand the meaning behind the guideline and have a good reason to deviate, then do so. Aim to be consistent.

**Avoid** indicates something you should almost never do. Code examples to *avoid* are clearly marked with an `/* avoid */` comment.

**Why**? Gives reasons for following the previous recommendations.

File structure conventions
--------------------------

Some code examples display a file that has one or more similarly named companion files. For example, `hero.component.ts` and `hero.component.html`. The guideline uses the shortcut `hero.component.ts|html|css|spec` to represent those various files. Using this shortcut makes this guide's file structures easier to read and more terse.

Single responsibility
---------------------

Apply the [*single responsibility principle (SRP)*](https://wikipedia.org/wiki/Single_responsibility_principle) to all components, services, and other symbols. This helps make the application cleaner, easier to read and maintain, and more testable.

### Rule of One

#### Style 01-01

**Do** define one thing, such as a service or component, per file.

**Consider** limiting files to 400 lines of code.

**Why**? One component per file makes it far easier to read, maintain, and avoid collisions with teams in source control.

**Why**? One component per file avoids hidden bugs that often arise when combining components in a file where they may share variables, create unwanted closures, or unwanted coupling with dependencies.

**Why**? A single component can be the default export for its file which facilitates lazy loading with the router.

The key is to make the code more reusable, easier to read, and less mistake-prone. The following *negative* example defines the `AppComponent`, bootstraps the app, defines the `Hero` model object, and loads heroes from the server all in the same file. *Don't do this*.
``` /* avoid */ import { Component, NgModule, OnInit } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; interface Hero { id: number; name: string; } @Component({ selector: 'app-root', template: ` <h1>{{title}}</h1> <pre>{{heroes | json}}</pre> `, styleUrls: ['app/app.component.css'] }) class AppComponent implements OnInit { title = 'Tour of Heroes'; heroes: Hero[] = []; ngOnInit() { getHeroes().then(heroes => (this.heroes = heroes)); } } @NgModule({ imports: [BrowserModule], declarations: [AppComponent], exports: [AppComponent], bootstrap: [AppComponent] }) export class AppModule {} platformBrowserDynamic().bootstrapModule(AppModule); const HEROES: Hero[] = [ { id: 1, name: 'Bombasto' }, { id: 2, name: 'Tornado' }, { id: 3, name: 'Magneta' } ]; function getHeroes(): Promise<Hero[]> { return Promise.resolve(HEROES); // TODO: get hero data from the server; } ``` It is a better practice to redistribute the component and its supporting classes into their own, dedicated files. ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; platformBrowserDynamic().bootstrapModule(AppModule); ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { RouterModule } from '@angular/router'; import { AppComponent } from './app.component'; import { HeroesComponent } from './heroes/heroes.component'; @NgModule({ imports: [ BrowserModule, ], declarations: [ AppComponent, HeroesComponent ], exports: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` import { Component } from '@angular/core'; import { HeroService } from './heroes'; @Component({ selector: 'toh-app', template: ` <toh-heroes></toh-heroes> `, styleUrls: ['./app.component.css'], providers: [HeroService] }) export class AppComponent {} ``` ``` import { Component, OnInit } from '@angular/core'; import { Hero, HeroService } from './shared'; @Component({ selector: 'toh-heroes', template: ` <pre>{{heroes | json}}</pre> ` }) export class HeroesComponent implements OnInit { heroes: Hero[] = []; constructor(private heroService: HeroService) {} ngOnInit() { this.heroService.getHeroes() .then(heroes => this.heroes = heroes); } } ``` ``` import { Injectable } from '@angular/core'; import { HEROES } from './mock-heroes'; @Injectable() export class HeroService { getHeroes() { return Promise.resolve(HEROES); } } ``` ``` export interface Hero { id: number; name: string; } ``` ``` import { Hero } from './hero.model'; export const HEROES: Hero[] = [ { id: 1, name: 'Bombasto' }, { id: 2, name: 'Tornado' }, { id: 3, name: 'Magneta' } ]; ``` As the application grows, this rule becomes even more important. [Back to top](styleguide#toc) ### Small functions #### Style 01-02 **Do** define small functions **Consider** limiting to no more than 75 lines. **Why**? Small functions are easier to test, especially when they do one thing and serve one purpose. **Why**? Small functions promote reuse. **Why**? Small functions are easier to read. **Why**? Small functions are easier to maintain. **Why**? Small functions help avoid hidden bugs that come with large functions that share variables with external scope, create unwanted closures, or unwanted coupling with dependencies. [Back to top](styleguide#toc) Naming ------ Naming conventions are hugely important to maintainability and readability. 
This guide recommends naming conventions for the file name and the symbol name. ### General Naming Guidelines #### Style 02-01 **Do** use consistent names for all symbols. **Do** follow a pattern that describes the symbol's feature then its type. The recommended pattern is `feature.type.ts`. **Why**? Naming conventions help provide a consistent way to find content at a glance. Consistency within the project is vital. Consistency with a team is important. Consistency across a company provides tremendous efficiency. **Why**? The naming conventions should help find desired code faster and make it easier to understand. **Why**? Names of folders and files should clearly convey their intent. For example, `app/heroes/hero-list.component.ts` may contain a component that manages a list of heroes. [Back to top](styleguide#toc) ### Separate file names with dots and dashes #### Style 02-02 **Do** use dashes to separate words in the descriptive name. **Do** use dots to separate the descriptive name from the type. **Do** use consistent type names for all components following a pattern that describes the component's feature then its type. A recommended pattern is `feature.type.ts`. **Do** use conventional type names including `.service`, `.component`, `.pipe`, `.module`, and `.directive`. Invent additional type names if you must but take care not to create too many. **Why**? Type names provide a consistent way to quickly identify what is in the file. **Why**? Type names make it easy to find a specific file type using an editor or IDE's fuzzy search techniques. **Why**? Unabbreviated type names such as `.service` are descriptive and unambiguous. Abbreviations such as `.srv`, `.svc`, and `.serv` can be confusing. **Why**? Type names provide pattern matching for any automated tasks. [Back to top](styleguide#toc) ### Symbols and file names #### Style 02-03 **Do** use consistent names for all assets named after what they represent. **Do** use upper camel case for class names. **Do** match the name of the symbol to the name of the file. **Do** append the symbol name with the conventional suffix (such as `[Component](../api/core/component)`, `[Directive](../api/core/directive)`, `Module`, `[Pipe](../api/core/pipe)`, or `Service`) for a thing of that type. **Do** give the filename the conventional suffix (such as `.component.ts`, `.directive.ts`, `.module.ts`, `.pipe.ts`, or `.service.ts`) for a file of that type. **Why**? Consistent conventions make it easy to quickly identify and reference assets of different types. | Symbol name | File name | | --- | --- | | ``` @Component({ … }) export class AppComponent { } ``` | app.component.ts | | ``` @Component({ … }) export class HeroesComponent { } ``` | heroes.component.ts | | ``` @Component({ … }) export class HeroListComponent { } ``` | hero-list.component.ts | | ``` @Component({ … }) export class HeroDetailComponent { } ``` | hero-detail.component.ts | | ``` @Directive({ … }) export class ValidationDirective { } ``` | validation.directive.ts | | ``` @NgModule({ … }) export class AppModule ``` | app.module.ts | | ``` @Pipe({ name: 'initCaps' }) export class InitCapsPipe implements PipeTransform { } ``` | init-caps.pipe.ts | | ``` @Injectable() export class UserProfileService { } ``` | user-profile.service.ts | [Back to top](styleguide#toc) ### Service names #### Style 02-04 **Do** use consistent names for all services named after their feature. **Do** suffix a service class name with `Service`. 
For example, something that gets data or heroes should be called a `DataService` or a `HeroService`. A few terms are unambiguously services. They typically indicate agency by ending in "-er". You may prefer to name a service that logs messages `Logger` rather than `LoggerService`. Decide if this exception is agreeable in your project. As always, strive for consistency. **Why**? Provides a consistent way to quickly identify and reference services. **Why**? Clear service names such as `Logger` do not require a suffix. **Why**? Service names such as `Credit` are nouns and require a suffix and should be named with a suffix when it is not obvious if it is a service or something else. | Symbol name | File name | | --- | --- | | ``` @Injectable() export class HeroDataService { } ``` | hero-data.service.ts | | ``` @Injectable() export class CreditService { } ``` | credit.service.ts | | ``` @Injectable() export class Logger { } ``` | logger.service.ts | [Back to top](styleguide#toc) ### Bootstrapping #### Style 02-05 **Do** put bootstrapping and platform logic for the application in a file named `main.ts`. **Do** include error handling in the bootstrapping logic. **Avoid** putting application logic in `main.ts`. Instead, consider placing it in a component or service. **Why**? Follows a consistent convention for the startup logic of an app. **Why**? Follows a familiar convention from other technology platforms. ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; platformBrowserDynamic().bootstrapModule(AppModule) .then(success => console.log(`Bootstrap success`)) .catch(err => console.error(err)); ``` [Back to top](styleguide#toc) ### Component selectors #### Style 05-02 **Do** use *dashed-case* or *kebab-case* for naming the element selectors of components. **Why**? Keeps the element names consistent with the specification for [Custom Elements](https://www.w3.org/TR/custom-elements). ``` /* avoid */ @Component({ selector: 'tohHeroButton', templateUrl: './hero-button.component.html' }) export class HeroButtonComponent {} ``` ``` @Component({ selector: 'toh-hero-button', templateUrl: './hero-button.component.html' }) export class HeroButtonComponent {} ``` ``` <toh-hero-button></toh-hero-button> ``` [Back to top](styleguide#toc) ### Component custom prefix #### Style 02-07 **Do** use a hyphenated, lowercase element selector value; for example, `admin-users`. **Do** use a custom prefix for a component selector. For example, the prefix `toh` represents **T**our **o**f **H**eroes and the prefix `admin` represents an admin feature area. **Do** use a prefix that identifies the feature area or the application itself. **Why**? Prevents element name collisions with components in other applications and with native HTML elements. **Why**? Makes it easier to promote and share the component in other applications. **Why**? Components are easy to identify in the DOM. ``` /* avoid */ // HeroComponent is in the Tour of Heroes feature @Component({ selector: 'hero' }) export class HeroComponent {} ``` ``` /* avoid */ // UsersComponent is in an Admin feature @Component({ selector: 'users' }) export class UsersComponent {} ``` ``` @Component({ selector: 'toh-hero' }) export class HeroComponent {} ``` ``` @Component({ selector: 'admin-users' }) export class UsersComponent {} ``` [Back to top](styleguide#toc) ### Directive selectors #### Style 02-06 **Do** Use lower camel case for naming the selectors of directives. **Why**? 
Keeps the names of the properties defined in the directives that are bound to the view consistent with the attribute names. **Why**? The Angular HTML parser is case-sensitive and recognizes lower camel case. [Back to top](styleguide#toc) ### Directive custom prefix #### Style 02-08 **Do** use a custom prefix for the selector of directives (for example, the prefix `toh` from **T**our **o**f **H**eroes). **Do** spell non-element selectors in lower camel case unless the selector is meant to match a native HTML attribute. **Don't** prefix a directive name with `ng` because that prefix is reserved for Angular and using it could cause bugs that are difficult to diagnose. **Why**? Prevents name collisions. **Why**? Directives are easily identified. ``` /* avoid */ @Directive({ selector: '[validate]' }) export class ValidateDirective {} ``` ``` @Directive({ selector: '[tohValidate]' }) export class ValidateDirective {} ``` [Back to top](styleguide#toc) ### Pipe names #### Style 02-09 **Do** use consistent names for all pipes, named after their feature. The pipe class name should use [UpperCamelCase](glossary#case-types) (the general convention for class names), and the corresponding `name` string should use *lowerCamelCase*. The `name` string cannot use hyphens ("dash-case" or "kebab-case"). **Why**? Provides a consistent way to quickly identify and reference pipes. | Symbol name | File name | | --- | --- | | ``` @Pipe({ name: 'ellipsis' }) export class EllipsisPipe implements PipeTransform { } ``` | ellipsis.pipe.ts | | ``` @Pipe({ name: 'initCaps' }) export class InitCapsPipe implements PipeTransform { } ``` | init-caps.pipe.ts | [Back to top](styleguide#toc) ### Unit test file names #### Style 02-10 **Do** name test specification files the same as the component they test. **Do** name test specification files with a suffix of `.spec`. **Why**? Provides a consistent way to quickly identify tests. **Why**? Provides pattern matching for [karma](https://karma-runner.github.io) or other test runners. | Test type | File names | | --- | --- | | Components | heroes.component.spec.ts hero-list.component.spec.ts hero-detail.component.spec.ts | | Services | logger.service.spec.ts hero.service.spec.ts filter-text.service.spec.ts | | Pipes | ellipsis.pipe.spec.ts init-caps.pipe.spec.ts | [Back to top](styleguide#toc) ### `End-to-End` (E2E) test file names #### Style 02-11 **Do** name end-to-end test specification files after the feature they test with a suffix of `.e2e-spec`. **Why**? Provides a consistent way to quickly identify end-to-end tests. **Why**? Provides pattern matching for test runners and build automation. | Test type | File names | | --- | --- | | End-to-End Tests | app.e2e-spec.ts heroes.e2e-spec.ts | [Back to top](styleguide#toc) ### Angular `[NgModule](../api/core/ngmodule)` names #### Style 02-12 **Do** append the symbol name with the suffix `Module`. **Do** give the file name the `.module.ts` extension. **Do** name the module after the feature and folder it resides in. **Why**? Provides a consistent way to quickly identify and reference modules. **Why**? Upper camel case is conventional for identifying objects that can be instantiated using a constructor. **Why**? Easily identifies the module as the root of the same named feature. **Do** suffix a `RoutingModule` class name with `RoutingModule`. **Do** end the filename of a `RoutingModule` with `-routing.module.ts`. **Why**? A `RoutingModule` is a module dedicated exclusively to configuring the Angular router. 
A consistent class and file name convention make these modules easy to spot and verify. | Symbol name | File name | | --- | --- | | ``` @NgModule({ … }) export class AppModule { } ``` | app.module.ts | | ``` @NgModule({ … }) export class HeroesModule { } ``` | heroes.module.ts | | ``` @NgModule({ … }) export class VillainsModule { } ``` | villains.module.ts | | ``` @NgModule({ … }) export class AppRoutingModule { } ``` | app-routing.module.ts | | ``` @NgModule({ … }) export class HeroesRoutingModule { } ``` | heroes-routing.module.ts | [Back to top](styleguide#toc) Application structure and NgModules ----------------------------------- Have a near-term view of implementation and a long-term vision. Start small but keep in mind where the application is heading. All of the application's code goes in a folder named `src`. All feature areas are in their own folder, with their own NgModule. All content is one asset per file. Each component, service, and pipe is in its own file. All third party vendor scripts are stored in another folder and not in the `src` folder. You didn't write them and you don't want them cluttering `src`. Use the naming conventions for files in this guide. [Back to top](styleguide#toc) ### `LIFT` #### Style 04-01 **Do** structure the application such that you can **L**ocate code quickly, **I**dentify the code at a glance, keep the **F**lattest structure you can, and **T**ry to be DRY. **Do** define the structure to follow these four basic guidelines, listed in order of importance. **Why**? LIFT provides a consistent structure that scales well, is modular, and makes it easier to increase developer efficiency by finding code quickly. To confirm your intuition about a particular structure, ask: *Can I quickly open and start work in all of the related files for this feature*? [Back to top](styleguide#toc) ### Locate #### Style 04-02 **Do** make locating code intuitive and fast. **Why**? To work efficiently you must be able to find files quickly, especially when you do not know (or do not remember) the file *names*. Keeping related files near each other in an intuitive location saves time. A descriptive folder structure makes a world of difference to you and the people who come after you. [Back to top](styleguide#toc) ### Identify #### Style 04-03 **Do** name the file such that you instantly know what it contains and represents. **Do** be descriptive with file names and keep the contents of the file to exactly one component. **Avoid** files with multiple components, multiple services, or a mixture. **Why**? Spend less time hunting and pecking for code, and become more efficient. Longer file names are far better than *short-but-obscure* abbreviated names. > It may be advantageous to deviate from the *one-thing-per-file* rule when you have a set of small, closely-related features that are better discovered and understood in a single file than as multiple files. Be wary of this loophole. > > [Back to top](styleguide#toc) ### Flat #### Style 04-04 **Do** keep a flat folder structure as long as possible. **Consider** creating sub-folders when a folder reaches seven or more files. **Consider** configuring the IDE to hide distracting, irrelevant files such as generated `.js` and `.js.map` files. **Why**? No one wants to search for a file through seven levels of folders. A flat structure is easy to scan. 
On the other hand, [psychologists believe](https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two) that humans start to struggle when the number of adjacent interesting things exceeds nine. So when a folder has ten or more files, it may be time to create subfolders. Base your decision on your comfort level. Use a flatter structure until there is an obvious value to creating a new folder.

[Back to top](styleguide#toc)

### `T-DRY` (Try to be `DRY`)

#### Style 04-05

**Do** be DRY (Don't Repeat Yourself).

**Avoid** being so DRY that you sacrifice readability.

**Why**? Being DRY is important, but not crucial if it sacrifices the other elements of LIFT. That's why it's called *T-DRY*. For example, it's redundant to name a template `hero-view.component.html` because with the `.html` extension, it is obviously a view. But if something is not obvious or departs from a convention, then spell it out.

[Back to top](styleguide#toc)

### Overall structural guidelines

#### Style 04-06

**Do** start small but keep in mind where the application is heading down the road.

**Do** have a near term view of implementation and a long term vision.

**Do** put all of the application's code in a folder named `src`.

**Consider** creating a folder for a component when it has multiple accompanying files (`.ts`, `.html`, `.css`, and `.spec`).

**Why**? Helps keep the application structure small and easy to maintain in the early stages, while being easy to evolve as the application grows.

**Why**? Components often have four files (for example, `*.html`, `*.css`, `*.ts`, and `*.spec.ts`) and can clutter a folder quickly.

Here is a compliant folder and file structure:

``` <project root> src app core exception.service.ts|spec.ts user-profile.service.ts|spec.ts heroes hero hero.component.ts|html|css|spec.ts hero-list hero-list.component.ts|html|css|spec.ts shared hero-button.component.ts|html|css|spec.ts hero.model.ts hero.service.ts|spec.ts heroes.component.ts|html|css|spec.ts heroes.module.ts heroes-routing.module.ts shared shared.module.ts init-caps.pipe.ts|spec.ts filter-text.component.ts|spec.ts filter-text.service.ts|spec.ts villains villain … villain-list … shared … villains.component.ts|html|css|spec.ts villains.module.ts villains-routing.module.ts app.component.ts|html|css|spec.ts app.module.ts app-routing.module.ts main.ts index.html … node_modules/… … ```

> While components in dedicated folders are widely preferred, another option for small applications is to keep components flat (not in a dedicated folder). This adds up to four files to the existing folder, but also reduces the folder nesting. Whatever you choose, be consistent.

[Back to top](styleguide#toc)

### `Folders-by-feature` structure

#### Style 04-07

**Do** create folders named for the feature area they represent.

**Why**? A developer can locate the code and identify what each file represents at a glance. The structure is as flat as it can be and there are no repetitive or redundant names.

**Why**? The LIFT guidelines are all covered.

**Why**? Helps keep the application from becoming cluttered by organizing the content and keeping it aligned with the LIFT guidelines.

**Why**? When there are a lot of files, for example 10+, locating them is easier with a consistent folder structure and more difficult in a flat structure.

**Do** create an NgModule for each feature area.

**Why**? NgModules make it easy to lazy load routable features.

**Why**? NgModules make it easier to isolate, test, and reuse features.
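To illustrate that last lazy-loading point, here is a minimal sketch of a feature NgModule being loaded on demand from the router configuration. The `'heroes'` route path and the `HeroesModule` file location are assumed names for the example, not taken from a specific sample.

```
// app-routing.module.ts (illustrative sketch)
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  {
    path: 'heroes',
    // The feature module is fetched and compiled on demand,
    // not at application startup.
    loadChildren: () =>
      import('./heroes/heroes.module').then(m => m.HeroesModule)
  }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule {}
```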
For more information, refer to [this folder and file structure example](styleguide#file-tree). [Back to top](styleguide#toc) ### App `root module` #### Style 04-08 **Do** create an NgModule in the application's root folder, for example, in `/src/app`. **Why**? Every application requires at least one root NgModule. **Consider** naming the root module `app.module.ts`. **Why**? Makes it easier to locate and identify the root module. ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; import { HeroesComponent } from './heroes/heroes.component'; @NgModule({ imports: [ BrowserModule, ], declarations: [ AppComponent, HeroesComponent ], exports: [ AppComponent ] }) export class AppModule {} ``` [Back to top](styleguide#toc) ### Feature modules #### Style 04-09 **Do** create an NgModule for all distinct features in an application; for example, a `Heroes` feature. **Do** place the feature module in the same named folder as the feature area; for example, in `app/heroes`. **Do** name the feature module file reflecting the name of the feature area and folder; for example, `app/heroes/heroes.module.ts`. **Do** name the feature module symbol reflecting the name of the feature area, folder, and file; for example, `app/heroes/heroes.module.ts` defines `HeroesModule`. **Why**? A feature module can expose or hide its implementation from other modules. **Why**? A feature module identifies distinct sets of related components that comprise the feature area. **Why**? A feature module can easily be routed to both eagerly and lazily. **Why**? A feature module defines clear boundaries between specific functionality and other application features. **Why**? A feature module helps clarify and make it easier to assign development responsibilities to different teams. **Why**? A feature module can easily be isolated for testing. [Back to top](styleguide#toc) ### Shared feature module #### Style 04-10 **Do** create a feature module named `SharedModule` in a `shared` folder; for example, `app/shared/shared.module.ts` defines `SharedModule`. **Do** declare components, directives, and pipes in a shared module when those items will be re-used and referenced by the components declared in other feature modules. **Consider** using the name SharedModule when the contents of a shared module are referenced across the entire application. **Consider** *not* providing services in shared modules. Services are usually singletons that are provided once for the entire application or in a particular feature module. There are exceptions, however. For example, in the sample code that follows, notice that the `SharedModule` provides `FilterTextService`. This is acceptable here because the service is stateless;that is, the consumers of the service aren't impacted by new instances. **Do** import all modules required by the assets in the `SharedModule`; for example, `[CommonModule](../api/common/commonmodule)` and `[FormsModule](../api/forms/formsmodule)`. **Why**? `SharedModule` will contain components, directives, and pipes that may need features from another common module; for example, `[ngFor](../api/common/ngfor)` in `[CommonModule](../api/common/commonmodule)`. **Do** declare all components, directives, and pipes in the `SharedModule`. **Do** export all symbols from the `SharedModule` that other feature modules need to use. **Why**? 
`SharedModule` exists to make commonly used components, directives, and pipes available for use in the templates of components in many other modules. **Avoid** specifying app-wide singleton providers in a `SharedModule`. Intentional singletons are OK. Take care. **Why**? A lazy loaded feature module that imports that shared module will make its own copy of the service and likely have undesirable results. **Why**? You don't want each module to have its own separate instance of singleton services. Yet there is a real danger of that happening if the `SharedModule` provides a service. ``` src app shared shared.module.ts init-caps.pipe.ts|spec.ts filter-text.component.ts|spec.ts filter-text.service.ts|spec.ts app.component.ts|html|css|spec.ts app.module.ts app-routing.module.ts main.ts index.html … ``` ``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { FormsModule } from '@angular/forms'; import { FilterTextComponent } from './filter-text/filter-text.component'; import { FilterTextService } from './filter-text/filter-text.service'; import { InitCapsPipe } from './init-caps.pipe'; @NgModule({ imports: [CommonModule, FormsModule], declarations: [ FilterTextComponent, InitCapsPipe ], providers: [FilterTextService], exports: [ CommonModule, FormsModule, FilterTextComponent, InitCapsPipe ] }) export class SharedModule { } ``` ``` import { Pipe, PipeTransform } from '@angular/core'; @Pipe({ name: 'initCaps' }) export class InitCapsPipe implements PipeTransform { transform = (value: string) => value; } ``` ``` import { Component, EventEmitter, Output } from '@angular/core'; @Component({ selector: 'toh-filter-text', template: '<input type="text" id="filterText" [(ngModel)]="filter" (keyup)="filterChanged($event)" />' }) export class FilterTextComponent { @Output() changed: EventEmitter<string>; filter = ''; constructor() { this.changed = new EventEmitter<string>(); } clear() { this.filter = ''; } filterChanged(event: any) { event.preventDefault(); console.log(`Filter Changed: ${this.filter}`); this.changed.emit(this.filter); } } ``` ``` import { Injectable } from '@angular/core'; @Injectable() export class FilterTextService { constructor() { console.log('Created an instance of FilterTextService'); } filter(data: string, props: Array<string>, originalList: Array<any>) { let filteredList: any[]; if (data && props && originalList) { data = data.toLowerCase(); const filtered = originalList.filter(item => { let match = false; for (const prop of props) { if (item[prop].toString().toLowerCase().indexOf(data) > -1) { match = true; break; } } return match; }); filteredList = filtered; } else { filteredList = originalList; } return filteredList; } } ``` ``` import { Component } from '@angular/core'; import { FilterTextService } from '../shared/filter-text/filter-text.service'; @Component({ selector: 'toh-heroes', templateUrl: './heroes.component.html' }) export class HeroesComponent { heroes = [ { id: 1, name: 'Windstorm' }, { id: 2, name: 'Bombasto' }, { id: 3, name: 'Magneta' }, { id: 4, name: 'Tornado' } ]; filteredHeroes = this.heroes; constructor(private filterService: FilterTextService) { } filterChanged(searchText: string) { this.filteredHeroes = this.filterService.filter(searchText, ['id', 'name'], this.heroes); } } ``` ``` <div>This is heroes component</div> <ul> <li *ngFor="let hero of filteredHeroes"> {{hero.name}} </li> </ul> <toh-filter-text (changed)="filterChanged($event)"></toh-filter-text> ``` [Back to top](styleguide#toc) ### Lazy Loaded 
folders #### Style 04-11 A distinct application feature or workflow may be *lazy loaded* or *loaded on demand* rather than when the application starts. **Do** put the contents of lazy loaded features in a *lazy loaded folder*. A typical *lazy loaded folder* contains a *routing component*, its child components, and their related assets and modules. **Why**? The folder makes it easy to identify and isolate the feature content. [Back to top](styleguide#toc) ### Never directly import lazy loaded folders #### Style 04-12 **Avoid** allowing modules in sibling and parent folders to directly import a module in a *lazy loaded feature*. **Why**? Directly importing and using a module will load it immediately when the intention is to load it on demand. [Back to top](styleguide#toc) ### Do not add filtering and sorting logic to pipes #### Style 04-13 **Avoid** adding filtering or sorting logic into custom pipes. **Do** pre-compute the filtering and sorting logic in components or services before binding the model in templates. **Why**? Filtering and especially sorting are expensive operations. As Angular can call pipe methods many times per second, sorting and filtering operations can degrade the user experience severely for even moderately-sized lists. [Back to top](styleguide#toc) Components ---------- ### Components as elements #### Style 05-03 **Consider** giving components an *element* selector, as opposed to *attribute* or *class* selectors. **Why**? Components have templates containing HTML and optional Angular template syntax. They display content. Developers place components on the page as they would native HTML elements and web components. **Why**? It is easier to recognize that a symbol is a component by looking at the template's html. > There are a few cases where you give a component an attribute, such as when you want to augment a built-in element. For example, [Material Design](https://material.angular.io/components/button/overview) uses this technique with `<button mat-button>`. However, you wouldn't use this technique on a custom element. > > ``` /* avoid */ @Component({ selector: '[tohHeroButton]', templateUrl: './hero-button.component.html' }) export class HeroButtonComponent {} ``` ``` <!-- avoid --> <div tohHeroButton></div> ``` ``` @Component({ selector: 'toh-hero-button', templateUrl: './hero-button.component.html' }) export class HeroButtonComponent {} ``` ``` <toh-hero-button></toh-hero-button> ``` [Back to top](styleguide#toc) ### Extract templates and styles to their own files #### Style 05-04 **Do** extract templates and styles into a separate file, when more than 3 lines. **Do** name the template file `[component-name].component.html`, where [component-name] is the component name. **Do** name the style file `[component-name].component.css`, where [component-name] is the component name. **Do** specify *component-relative* URLs, prefixed with `./`. **Why**? Large, inline templates and styles obscure the component's purpose and implementation, reducing readability and maintainability. **Why**? In most editors, syntax hints and code snippets aren't available when developing inline templates and styles. The Angular TypeScript Language Service (forthcoming) promises to overcome this deficiency for HTML templates in those editors that support it; it won't help with CSS styles. **Why**? A *component relative* URL requires no change when you move the component files, as long as the files stay together. **Why**? 
The `./` prefix is standard syntax for relative URLs; don't depend on Angular's current ability to do without that prefix. ``` /* avoid */ @Component({ selector: 'toh-heroes', template: ` <div> <h2>My Heroes</h2> <ul class="heroes"> <li *ngFor="let hero of heroes | async" (click)="selectedHero=hero"> <span class="badge">{{hero.id}}</span> {{hero.name}} </li> </ul> <div *ngIf="selectedHero"> <h2>{{selectedHero.name | uppercase}} is my hero</h2> </div> </div> `, styles: [` .heroes { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 15em; } .heroes li { cursor: pointer; position: relative; left: 0; background-color: #EEE; margin: .5em; padding: .3em 0; height: 1.6em; border-radius: 4px; } .heroes .badge { display: inline-block; font-size: small; color: white; padding: 0.8em 0.7em 0 0.7em; background-color: #607D8B; line-height: 1em; position: relative; left: -1px; top: -4px; height: 1.8em; margin-right: .8em; border-radius: 4px 0 0 4px; } `] }) export class HeroesComponent { heroes: Observable<Hero[]>; selectedHero!: Hero; constructor(private heroService: HeroService) { this.heroes = this.heroService.getHeroes(); } } ``` ``` @Component({ selector: 'toh-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent { heroes: Observable<Hero[]>; selectedHero!: Hero; constructor(private heroService: HeroService) { this.heroes = this.heroService.getHeroes(); } } ``` ``` <div> <h2>My Heroes</h2> <ul class="heroes"> <li *ngFor="let hero of heroes | async"> <button type="button" (click)="selectedHero=hero"> <span class="badge">{{hero.id}}</span> <span class="name">{{hero.name}}</span> </button> </li> </ul> <div *ngIf="selectedHero"> <h2>{{selectedHero.name | uppercase}} is my hero</h2> </div> </div> ``` ``` .heroes { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 15em; } .heroes li { display: flex; } .heroes button { flex: 1; cursor: pointer; position: relative; left: 0; background-color: #EEE; margin: .5em; padding: 0; border-radius: 4px; display: flex; align-items: stretch; height: 1.8em; } .heroes button:hover { color: #2c3a41; background-color: #e6e6e6; left: .1em; } .heroes button:active { background-color: #525252; color: #fafafa; } .heroes button.selected { background-color: black; color: white; } .heroes button.selected:hover { background-color: #505050; color: white; } .heroes button.selected:active { background-color: black; color: white; } .heroes .badge { display: inline-block; font-size: small; color: white; padding: 0.8em 0.7em 0 0.7em; background-color: #405061; line-height: 1em; margin-right: .8em; border-radius: 4px 0 0 4px; } .heroes .name { align-self: center; } ``` [Back to top](styleguide#toc) ### Decorate `input` and `output` properties #### Style 05-12 **Do** use the `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` class decorators instead of the `inputs` and `outputs` properties of the `@[Directive](../api/core/directive)` and `@[Component](../api/core/component)` metadata: **Consider** placing `@[Input](../api/core/input)()` or `@[Output](../api/core/output)()` on the same line as the property it decorates. **Why**? It is easier and more readable to identify which properties in a class are inputs or outputs. **Why**? If you ever need to rename the property or event name associated with `@[Input](../api/core/input)()` or `@[Output](../api/core/output)()`, you can modify it in a single place. **Why**? 
The metadata declaration attached to the directive is shorter and thus more readable. **Why**? Placing the decorator on the same line *usually* makes for shorter code and still easily identifies the property as an input or output. Put it on the line above when doing so is clearly more readable. ``` /* avoid */ @Component({ selector: 'toh-hero-button', template: `<button type="button"></button>`, inputs: [ 'label' ], outputs: [ 'heroChange' ] }) export class HeroButtonComponent { heroChange = new EventEmitter<any>(); label: string; } ``` ``` @Component({ selector: 'toh-hero-button', template: `<button type="button">{{label}}</button>` }) export class HeroButtonComponent { @Output() heroChange = new EventEmitter<any>(); @Input() label = ''; } ``` [Back to top](styleguide#toc) ### Avoid aliasing `inputs` and `outputs` #### Style 05-13 **Avoid** `input` and `output` aliases except when it serves an important purpose. **Why**? Two names for the same property (one private, one public) is inherently confusing. **Why**? You should use an alias when the directive name is also an `input` property, and the directive name doesn't describe the property. ``` /* avoid pointless aliasing */ @Component({ selector: 'toh-hero-button', template: `<button type="button">{{label}}</button>` }) export class HeroButtonComponent { // Pointless aliases @Output('heroChangeEvent') heroChange = new EventEmitter<any>(); @Input('labelAttribute') label: string; } ``` ``` <!-- avoid --> <toh-hero-button labelAttribute="OK" (changeEvent)="doSomething()"> </toh-hero-button> ``` ``` @Component({ selector: 'toh-hero-button', template: `<button type="button" >{{label}}</button>` }) export class HeroButtonComponent { // No aliases @Output() heroChange = new EventEmitter<any>(); @Input() label = ''; } ``` ``` import { Directive, ElementRef, Input, OnChanges } from '@angular/core'; @Directive({ selector: '[heroHighlight]' }) export class HeroHighlightDirective implements OnChanges { // Aliased because `color` is a better property name than `heroHighlight` @Input('heroHighlight') color = ''; constructor(private el: ElementRef) {} ngOnChanges() { this.el.nativeElement.style.backgroundColor = this.color || 'yellow'; } } ``` ``` <toh-hero-button label="OK" (change)="doSomething()"> </toh-hero-button> <!-- `heroHighlight` is both the directive name and the data-bound aliased property name --> <h3 heroHighlight="skyblue">The Great Bombasto</h3> ``` [Back to top](styleguide#toc) ### Member sequence #### Style 05-14 **Do** place properties up top followed by methods. **Do** place private members after public members, alphabetized. **Why**? Placing members in a consistent sequence makes it easy to read and helps instantly identify which members of the component serve which purpose. 
``` /* avoid */ export class ToastComponent implements OnInit { private defaults = { title: '', message: 'May the Force be with you' }; message: string; title: string; private toastElement: any; ngOnInit() { this.toastElement = document.getElementById('toh-toast'); } // private methods private hide() { this.toastElement.style.opacity = 0; window.setTimeout(() => this.toastElement.style.zIndex = 0, 400); } activate(message = this.defaults.message, title = this.defaults.title) { this.title = title; this.message = message; this.show(); } private show() { console.log(this.message); this.toastElement.style.opacity = 1; this.toastElement.style.zIndex = 9999; window.setTimeout(() => this.hide(), 2500); } } ``` ``` export class ToastComponent implements OnInit { // public properties message = ''; title = ''; // private fields private defaults = { title: '', message: 'May the Force be with you' }; private toastElement: any; // public methods activate(message = this.defaults.message, title = this.defaults.title) { this.title = title; this.message = message; this.show(); } ngOnInit() { this.toastElement = document.getElementById('toh-toast'); } // private methods private hide() { this.toastElement.style.opacity = 0; window.setTimeout(() => this.toastElement.style.zIndex = 0, 400); } private show() { console.log(this.message); this.toastElement.style.opacity = 1; this.toastElement.style.zIndex = 9999; window.setTimeout(() => this.hide(), 2500); } } ``` [Back to top](styleguide#toc) ### Delegate complex component logic to services #### Style 05-15 **Do** limit logic in a component to only that required for the view. All other logic should be delegated to services. **Do** move reusable logic to services and keep components simple and focused on their intended purpose. **Why**? Logic may be reused by multiple components when placed within a service and exposed as a function. **Why**? Logic in a service can more easily be isolated in a unit test, while the calling logic in the component can be easily mocked. **Why**? Removes dependencies and hides implementation details from the component. **Why**? Keeps the component slim, trim, and focused. ``` /* avoid */ import { OnInit } from '@angular/core'; import { HttpClient } from '@angular/common/http'; import { Observable } from 'rxjs'; import { catchError, finalize } from 'rxjs/operators'; import { Hero } from '../shared/hero.model'; const heroesUrl = 'http://angular.io'; export class HeroListComponent implements OnInit { heroes: Hero[]; constructor(private http: HttpClient) {} getHeroes() { this.heroes = []; this.http.get(heroesUrl).pipe( catchError(this.catchBadResponse), finalize(() => this.hideSpinner()) ).subscribe((heroes: Hero[]) => this.heroes = heroes); } ngOnInit() { this.getHeroes(); } private catchBadResponse(err: any, source: Observable<any>) { // log and handle the exception return new Observable(); } private hideSpinner() { // hide the spinner } } ``` ``` import { Component, OnInit } from '@angular/core'; import { Hero, HeroService } from '../shared'; @Component({ selector: 'toh-hero-list', template: `...` }) export class HeroListComponent implements OnInit { heroes: Hero[] = []; constructor(private heroService: HeroService) {} getHeroes() { this.heroes = []; this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes); } ngOnInit() { this.getHeroes(); } } ``` [Back to top](styleguide#toc) ### Don't prefix `output` properties #### Style 05-16 **Do** name events without the prefix `on`. 
**Do** name event handler methods with the prefix `on` followed by the event name.

**Why**? This is consistent with built-in events such as button clicks.

**Why**? Angular allows for an [alternative syntax](binding-syntax) `on-*`. If the event itself was prefixed with `on`, this would result in an `on-onEvent` binding expression.

``` /* avoid */ @Component({ selector: 'toh-hero', template: `...` }) export class HeroComponent { @Output() onSavedTheDay = new EventEmitter<boolean>(); } ```

``` <!-- avoid --> <toh-hero (onSavedTheDay)="onSavedTheDay($event)"></toh-hero> ```

``` export class HeroComponent { @Output() savedTheDay = new EventEmitter<boolean>(); } ```

``` <toh-hero (savedTheDay)="onSavedTheDay($event)"></toh-hero> ```

[Back to top](styleguide#toc)

### Put presentation logic in the component class

#### Style 05-17

**Do** put presentation logic in the component class, and not in the template.

**Why**? Logic will be contained in one place (the component class) instead of being spread in two places.

**Why**? Keeping the component's presentation logic in the class instead of the template improves testability, maintainability, and reusability.

``` /* avoid */ @Component({ selector: 'toh-hero-list', template: ` <section> Our list of heroes: <toh-hero *ngFor="let hero of heroes" [hero]="hero"> </toh-hero> Total powers: {{totalPowers}}<br> Average power: {{totalPowers / heroes.length}} </section> ` }) export class HeroListComponent { heroes: Hero[]; totalPowers: number; } ```

``` @Component({ selector: 'toh-hero-list', template: ` <section> Our list of heroes: <toh-hero *ngFor="let hero of heroes" [hero]="hero"> </toh-hero> Total powers: {{totalPowers}}<br> Average power: {{avgPower}} </section> ` }) export class HeroListComponent { heroes: Hero[]; totalPowers = 0; get avgPower() { return this.totalPowers / this.heroes.length; } } ```

[Back to top](styleguide#toc)

### Initialize inputs

#### Style 05-18

TypeScript's `--strictPropertyInitialization` compiler option ensures that a class initializes its properties during construction. When enabled, this option causes the TypeScript compiler to report an error if the class does not set a value to any property that is not explicitly marked as optional.

By design, Angular treats all `@[Input](../api/core/input)` properties as optional. When possible, you should satisfy `--strictPropertyInitialization` by providing a default value.

``` @Component({ selector: 'toh-hero', template: `...` }) export class HeroComponent { @Input() id = 'default_id'; } ```

If the property is hard to construct a default value for, use `?` to explicitly mark the property as optional.

``` @Component({ selector: 'toh-hero', template: `...` }) export class HeroComponent { @Input() id?: string; process() { if (this.id) { // ... } } } ```

You may want to have a required `@[Input](../api/core/input)` field, meaning all your component users are required to pass that attribute. In such cases, use a default value. Just suppressing the TypeScript error with `!` is insufficient and should be avoided because it will prevent the type checker from ensuring the input value is provided.

``` @Component({ selector: 'toh-hero', template: `...` }) export class HeroComponent { // The exclamation mark suppresses errors that a property is // not initialized. // Ignoring this enforcement can prevent the type checker // from finding potential issues.
@Input() id!: string; } ``` Directives ---------- ### Use directives to enhance an element #### Style 06-01 **Do** use attribute directives when you have presentation logic without a template. **Why**? Attribute directives don't have an associated template. **Why**? An element may have more than one attribute directive applied. ``` @Directive({ selector: '[tohHighlight]' }) export class HighlightDirective { @HostListener('mouseover') onMouseEnter() { // do highlight work } } ``` ``` <div tohHighlight>Bombasta</div> ``` [Back to top](styleguide#toc) ### `[HostListener](../api/core/hostlistener)`/`[HostBinding](../api/core/hostbinding)` decorators versus `host` metadata #### Style 06-03 **Consider** preferring the `@[HostListener](../api/core/hostlistener)` and `@[HostBinding](../api/core/hostbinding)` to the `host` property of the `@[Directive](../api/core/directive)` and `@[Component](../api/core/component)` decorators. **Do** be consistent in your choice. **Why**? The property associated with `@[HostBinding](../api/core/hostbinding)` or the method associated with `@[HostListener](../api/core/hostlistener)` can be modified only in a single place —in the directive's class. If you use the `host` metadata property, you must modify both the property/method declaration in the directive's class and the metadata in the decorator associated with the directive. ``` import { Directive, HostBinding, HostListener } from '@angular/core'; @Directive({ selector: '[tohValidator]' }) export class ValidatorDirective { @HostBinding('attr.role') role = 'button'; @HostListener('mouseenter') onMouseEnter() { // do work } } ``` Compare with the less preferred `host` metadata alternative. **Why**? The `host` metadata is only one term to remember and doesn't require extra ES imports. ``` import { Directive } from '@angular/core'; @Directive({ selector: '[tohValidator2]', host: { '[attr.role]': 'role', '(mouseenter)': 'onMouseEnter()' } }) export class Validator2Directive { role = 'button'; onMouseEnter() { // do work } } ``` [Back to top](styleguide#toc) Services -------- ### Services are singletons #### Style 07-01 **Do** use services as singletons within the same injector. Use them for sharing data and functionality. **Why**? Services are ideal for sharing methods across a feature area or an app. **Why**? Services are ideal for sharing stateful in-memory data. ``` export class HeroService { constructor(private http: HttpClient) { } getHeroes() { return this.http.get<Hero[]>('api/heroes'); } } ``` [Back to top](styleguide#toc) ### Single responsibility #### Style 07-02 **Do** create services with a single responsibility that is encapsulated by its context. **Do** create a new service once the service begins to exceed that singular purpose. **Why**? When a service has multiple responsibilities, it becomes difficult to test. **Why**? When a service has multiple responsibilities, every component or service that injects it now carries the weight of them all. [Back to top](styleguide#toc) ### Providing a service #### Style 07-03 **Do** provide a service with the application root injector in the `@[Injectable](../api/core/injectable)` decorator of the service. **Why**? The Angular injector is hierarchical. **Why**? When you provide the service to a root injector, that instance of the service is shared and available in every class that needs the service. This is ideal when a service is sharing methods or state. **Why**? 
When you register a service in the `@[Injectable](../api/core/injectable)` decorator of the service, optimization tools such as those used by the [Angular CLI's](cli) production builds can perform tree shaking and remove services that aren't used by your app. **Why**? This is not ideal when two different components need different instances of a service. In this scenario it would be better to provide the service at the component level that needs the new and separate instance. ``` @Injectable({ providedIn: 'root', }) export class Service { } ``` [Back to top](styleguide#toc) ### Use the @Injectable() class decorator #### Style 07-04 **Do** use the `@[Injectable](../api/core/injectable)()` class decorator instead of the `@[Inject](../api/core/inject)` parameter decorator when using types as tokens for the dependencies of a service. **Why**? The Angular Dependency Injection (DI) mechanism resolves a service's own dependencies based on the declared types of that service's constructor parameters. **Why**? When a service accepts only dependencies associated with type tokens, the `@[Injectable](../api/core/injectable)()` syntax is much less verbose compared to using `@[Inject](../api/core/inject)()` on each individual constructor parameter. ``` /* avoid */ export class HeroArena { constructor( @Inject(HeroService) private heroService: HeroService, @Inject(HttpClient) private http: HttpClient) {} } ``` ``` @Injectable() export class HeroArena { constructor( private heroService: HeroService, private http: HttpClient) {} } ``` [Back to top](styleguide#toc) Data Services ------------- ### Talk to the server through a service #### Style 08-01 **Do** refactor logic for making data operations and interacting with data to a service. **Do** make data services responsible for XHR calls, local storage, stashing in memory, or any other data operations. **Why**? The component's responsibility is for the presentation and gathering of information for the view. It should not care how it gets the data, just that it knows who to ask for it. Separating the data services moves the logic on how to get it to the data service, and lets the component be simpler and more focused on the view. **Why**? This makes it easier to test (mock or real) the data calls when testing a component that uses a data service. **Why**? The details of data management, such as headers, HTTP methods, caching, error handling, and retry logic, are irrelevant to components and other data consumers. A data service encapsulates these details. It's easier to evolve these details inside the service without affecting its consumers. And it's easier to test the consumers with mock service implementations. [Back to top](styleguide#toc) Lifecycle hooks --------------- Use Lifecycle hooks to tap into important events exposed by Angular. [Back to top](styleguide#toc) ### Implement lifecycle hook interfaces #### Style 09-01 **Do** implement the lifecycle hook interfaces. **Why**? Lifecycle interfaces prescribe typed method signatures. Use those signatures to flag spelling and syntax mistakes. 
``` /* avoid */ @Component({ selector: 'toh-hero-button', template: `<button type="button">OK</button>` }) export class HeroButtonComponent { onInit() { // misspelled console.log('The component is initialized'); } } ``` ``` @Component({ selector: 'toh-hero-button', template: `<button type="button">OK</button>` }) export class HeroButtonComponent implements OnInit { ngOnInit() { console.log('The component is initialized'); } } ``` [Back to top](styleguide#toc) Appendix -------- Useful tools and tips for Angular. [Back to top](styleguide#toc) ### File templates and snippets #### Style A-02 **Do** use file templates or snippets to help follow consistent styles and patterns. Here are templates and/or snippets for some of the web development editors and IDEs. **Consider** using [snippets](https://marketplace.visualstudio.com/items?itemName=johnpapa.Angular2) for [Visual Studio Code](https://code.visualstudio.com) that follow these styles and guidelines. [![Use Extension](https://angular.io/generated/images/guide/styleguide/use-extension.gif)](https://marketplace.visualstudio.com/items?itemName=johnpapa.Angular2) **Consider** using [snippets](https://atom.io/packages/angular-2-typescript-snippets) for [Atom](https://atom.io) that follow these styles and guidelines. **Consider** using [snippets](https://github.com/orizens/sublime-angular2-snippets) for [Sublime Text](https://www.sublimetext.com) that follow these styles and guidelines. **Consider** using [snippets](https://github.com/mhartington/vim-angular2-snippets) for [Vim](https://www.vim.org) that follow these styles and guidelines. [Back to top](styleguide#toc) Last reviewed on Mon Feb 28 2022
Open a documentation pull request
=================================

This topic describes how to open the pull request that requests your documentation update to be added to the `angular/angular` repo. These steps are performed in your web browser.

1. Locate the `working` branch that you want to use for your pull request. In this example, `test-1` is the name of the `working` branch.
2. Choose one of these options to open a pull request.
   * If you recently pushed the branch that you want to use to the `origin` repo, you might see it listed on the code page of the `angular` repo in your GitHub account. This image shows an example of a repo that has had several recent updates. In the alert message with your `working` branch, click **Compare & pull request** to open a pull request and continue to the next step.
   * You can also select your `working` branch in the code page of the origin repo. Click the link text in the `"This branch is"` message to open the **Comparing changes** page. In the **Comparing changes** page, click **Create pull request** to open the new pull request page.
3. Review and complete the form in the comment field. Most documentation updates require responses to the entries described below.
   1. **The commit message follows our guidelines** Mark this comment when you're sure your commit messages are in the correct format. Remember that the commit messages and the pull request title are different. For more information about commit message formatting, see [Preparing a documentation update for a pull request](doc-pr-prep) and [Commit message format](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#commit).
   2. **Docs have been added / updated (for bug fixes / features)** Mark this comment to show that documentation has been updated.
   3. **Documentation content changes** Mark this comment to identify this is a documentation pull request. If you also updated other types of content, you can mark those as well.
   4. **What is the current behavior?** Briefly describe what wasn't working or what was incorrect in the documentation before you made the changes in this pull request. Add the issue number here, if the problem is described in an issue.
   5. **What is the new behavior?** Briefly describe what was added to fix the problem.
   6. **Does this PR introduce a breaking change?** For most documentation updates, the answer to this should be `No`.
   7. **Other information** Add any other information that can help reviewers understand your pull request here.
4. Click the arrow next to **Draft pull request** and select whether you want to create a draft pull request or a pull request.
   * A draft pull request runs the continuous integration (CI) testing, but does not send the pull request to reviewers. You can ask people to review it by sending them the pull request link. You might use this option to see how your pull request passes the CI testing before you send it for review to be merged. Draft pull requests cannot be merged.
   * A pull request runs the continuous integration (CI) testing and sends your pull request to reviewers to review and merge.

   > **NOTE**: You can change draft pull requests to pull requests.

5. Click **Create the pull request** or **Draft pull request** to open the pull request. After GitHub creates the pull request, the browser opens the new pull request page.
6. After you open the pull request, the automated tests start running.
What happens after you open a pull request ------------------------------------------ In most cases, documentation pull requests that pass the automated tests are approved within a few days. Sometimes, reviewers suggest changes for you to make to improve your pull request. In those cases, review the suggestions and [update the pull request](doc-pr-update) with a comment or an updated file. ### What happens to abandoned pull requests While it can take a few days to respond to comments, try to respond as quickly as you can. Pull requests that appear to be abandoned or ignored are closed according to this schedule: * After 14 days of inactivity after the last comment, the author is reminded that the pull request has pending comments * After 28 days of inactivity after the last comment, the pull request is closed and not merged Last reviewed on Wed Oct 12 2022 angular Select a documentation issue Select a documentation issue ============================ This topic describes how to select an Angular documentation issue to fix. Angular documentation issues are stored in the **Issues** tab of the [angular/angular](https://github.com/angular/angular) repo. Documentation issues can be identified by the `comp: docs` label and they are labeled by priority. You are welcome to work on [any issue](doc-select-issue#links-to-documentation-issues) that someone else isn't already working on. If you know of a problem in the documentation that hasn't been reported, you can also [create a new documentation issue](https://github.com/angular/angular/issues/new?assignees=&labels=&template=3-docs-bug.yaml). Some things to consider when choosing an issue to fix include: * Fixing higher priority issues is more valuable to the community. * If you're new to open source software, a lower priority issue or a `good first issue` would be a good place to start. * Every contribution helps improve the documentation. After you select an issue to resolve: 1. In the issue page, add `working on fix` as a comment to let others know that you are working on it. 2. Continue to [Starting to edit a documentation topic](doc-update-start).
| Links to documentation issues | | --- | | [All open documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22) | | [All open and unassigned documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22+no%3Aassignee+-label%3A%22state%3A+has+PR%22) | | [Unassigned good first documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22+label%3A%22good+first+issue%22+no%3Aassignee+-label%3A%22state%3A+has+PR%22) | | [Unassigned priority 1 documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22+label%3Ap1+no%3Aassignee+-label%3A%22state%3A+has+PR%22) | | [Unassigned priority 2 documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22+label%3Ap2+no%3Aassignee+-label%3A%22state%3A+has+PR%22) | | [Unassigned priority 3 documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22+label%3Ap3+no%3Aassignee+-label%3A%22state%3A+has+PR%22) | | [Unassigned priority 4 documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22+label%3Ap4+no%3Aassignee+-label%3A%22state%3A+has+PR%22) | | [Unassigned priority 5 documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22+label%3Ap5+no%3Aassignee+-label%3A%22state%3A+has+PR%22) | @reviewed 2022-10-12 angular JavaScript modules vs. NgModules JavaScript modules vs. NgModules ================================ JavaScript modules and NgModules can help you modularize your code, but they are very different. Angular applications rely on both kinds of modules. JavaScript modules: Files containing code ----------------------------------------- A [JavaScript module](https://javascript.info/modules "JavaScript.Info - Modules") is an individual file with JavaScript code, usually containing a class or a library of functions for a specific purpose within your application. JavaScript modules let you spread your work across multiple files. > To learn more about JavaScript modules, see [ES6 In Depth: Modules](https://hacks.mozilla.org/2015/08/es6-in-depth-modules). For the module specification, see the [6th Edition of the ECMAScript standard](https://www.ecma-international.org/ecma-262/6.0/#sec-modules). > > To make the code in a JavaScript module available to other modules, use an `export` statement at the end of the relevant code in the module, such as the following: ``` export class AppComponent { … } ``` When you need that module's code in another module, use an `import` statement as follows: ``` import { AppComponent } from './app.component'; ``` Each module has its own top-level scope. In other words, top-level variables and functions in a module are not seen in other scripts or modules. Each module provides a namespace for identifiers to prevent them from clashing with identifiers in other modules. With multiple modules, you can prevent accidental global variables by creating a single global namespace and adding submodules to it. The Angular framework itself is loaded as a set of JavaScript modules. 
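To make the scoping point above concrete, here is a minimal sketch of grouping a module's exports under one local name with a namespace-style import. The file name `string-utils.ts` and the helper functions in it are hypothetical and used only for illustration; they are not part of any Angular example project.

```
// string-utils.ts — a hypothetical utility module; nothing here leaks into global scope
export function capitalize(value: string): string {
  return value.charAt(0).toUpperCase() + value.slice(1);
}

export function truncate(value: string, max: number): string {
  return value.length > max ? `${value.slice(0, max)}…` : value;
}
```

```
// another module imports everything under a single local identifier
import * as StringUtils from './string-utils';

const heading = StringUtils.capitalize('heroes'); // "Heroes"
```

Because the helpers are reached only through the imported identifier, they cannot clash with same-named functions defined in other modules.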
NgModules: Classes with metadata for compiling ---------------------------------------------- An [NgModule](glossary#ngmodule "Definition of NgModule") is a class marked by the `@[NgModule](../api/core/ngmodule)` decorator with a metadata object that describes how that particular part of the application fits together with the other parts. NgModules are specific to Angular. While classes with an `@[NgModule](../api/core/ngmodule)` decorator are by convention kept in their own files, they differ from JavaScript modules because they include this metadata. The `@[NgModule](../api/core/ngmodule)` metadata plays an important role in guiding the Angular compilation process that converts the application code you write into highly performant JavaScript code. The metadata describes how to compile a component's template and how to create an [injector](glossary#injector "Definition of injector") at runtime. It identifies the NgModule's [components](glossary#component "Definition of component"), [directives](glossary#directive "Definition of directive"), and [pipes](glossary#pipe "Definition of pipe)"), and makes some of them public through the `exports` property so that external components can use them. You can also use an NgModule to add [providers](glossary#provider "Definition of provider") for [services](glossary#service "Definition of a service"), so that the services are available elsewhere in your application. Rather than defining all member classes in one giant file as a JavaScript module, declare which components, directives, and pipes belong to the NgModule in the `@[NgModule.declarations](../api/core/ngmodule#declarations)` list. These classes are called [declarables](glossary#declarable "Definition of a declarable"). An NgModule can export only the declarable classes it owns or imports from other NgModules. It doesn't declare or export any other kind of class. Declarables are the only classes that matter to the Angular compilation process. For a complete description of the NgModule metadata properties, see [Using the NgModule metadata](ngmodule-api "Using the NgModule metadata"). An example that uses both ------------------------- The root NgModule `AppModule` generated by the [Angular CLI](cli) for a new application project demonstrates how you use both kinds of modules: ``` // imports import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; // @NgModule decorator with its metadata @NgModule({ declarations: [AppComponent], imports: [BrowserModule], providers: [], bootstrap: [AppComponent] }) export class AppModule {} ``` The root NgModule starts with `import` statements to import JavaScript modules. It then configures the `@[NgModule](../api/core/ngmodule)` with the following arrays: * `declarations`: The components, directives, and pipes that belong to the NgModule. A new application project's root NgModule has only one component, called `AppComponent`. * `imports`: Other NgModules you are using, so that you can use their declarables. The newly generated root NgModule imports [`BrowserModule`](../api/platform-browser/browsermodule "BrowserModule NgModule") in order to use browser-specific services such as [DOM](https://www.w3.org/TR/DOM-Level-2-Core/introduction.html "Definition of Document Object Model") rendering, sanitization, and location. * `providers`: Providers of services that components in other NgModules can use. There are no providers in a newly generated root NgModule. 
* `bootstrap`: The [entry component](entry-components "Specifying an entry component") that Angular creates and inserts into the `index.html` host web page, thereby bootstrapping the application. This entry component, `AppComponent`, appears in both the `declarations` and the `bootstrap` arrays. Next steps ---------- * For more about NgModules, see [Organizing your app with NgModules](ngmodules "Organizing your app with NgModules"). * To learn more about the root NgModule, see [Launching an app with a root NgModule](bootstrapping "Launching an app with a root NgModule"). * To learn about frequently used Angular NgModules and how to import them into your app, see [Frequently-used modules](frequent-ngmodules "Frequently-used modules"). Last reviewed on Mon Feb 28 2022 angular Prepare a documentation update for a pull request Prepare a documentation update for a pull request ================================================= This topic describes how to prepare your update to the Angular documentation so that you can open a pull request. A pull request is how you share your update in a way that allows it to be merged into the `angular/angular` repo. > **IMPORTANT**: Make sure that you have reviewed your documentation update, removed any lint errors, and confirmed that it passes the end-to-end (e2e) tests without errors. > > A pull request shares a branch in `personal/angular`, your forked repo, with the `angular/angular` repo. After your pull request is approved and merged, the new commits from your branch are added to the main branch in the `angular/angular` repo. The commits in your branch, and their messages, become part of the `angular/angular` repo. What does this mean for your pull request? 1. Your commit messages become part of the documentation of the changes made to Angular. Because they become part of the `angular/angular` repo, they must conform to a specific format so that they are easy to read. If they aren't correctly formatted, you can fix that before you open your pull request. 2. You might need to squash the commits that you made while developing your update. It's normal to save your changes as intermediate commits while you're developing a large update, but your pull request represents only one change to the `angular/angular` repo. Squashing the commits from your working branch into fewer, or just one commit, makes the commits in your pull request match the changes your update makes to the `angular/angular` repo. Format commit messages for a pull request ----------------------------------------- Commits merged to `angular/angular` must have messages that are correctly formatted. This section describes how to correctly format commit messages. Remember that the commit message is different from the pull request comment. ### Single line commit messages The simplest commit message is a single line of text. All commit messages in a pull request that updates documentation must begin with `docs:` and be followed by a short description of the change. The following is an example of a valid Angular commit message. ``` docs: a short summary in present tense without capitalization or ending period ``` This is an example of a commit command with the single-line commit message from the previous example. ``` git commit -m "docs: a short summary in present tense without capitalization or ending period" ``` ### Multi-line commit messages You can include more information by providing a more detailed, multi-line message.
The detailed body text of the message must be separated by a blank line after the summary. The footer that lists the issue the commit fixes must also be separated from the body text by a blank line. ``` docs: a short summary in present tense without capitalization or ending period A description of what was fixed, and why. This description can be as detailed as necessary and can be written with appropriate capitalization and punctuation Fixes #34353 ``` This is an example of a commit command with a multi-line commit message from the previous example. ``` git commit -m "docs: a short summary in present tense without capitalization or ending period A description of what was fixed, and why. This description can be as detailed as necessary and can be written with appropriate capitalization and punctuation. Fixes #34353" ``` This example is for documentation updates only. For the complete specification of Angular commit messages, see [Commit message format](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#commit). ### Change a commit message If the last commit you made has a message that isn't in the correct format, you can update the message. Changing the message of an earlier commit or of multiple commits is also possible, but requires a more complex procedure. Run this command to change the commit message of the most recent commit. The new commit message is formatted as described in the previous procedures. ``` git commit --amend -m "New commit message" ``` This command creates a new commit on your local computer that replaces the previous commit. You must push this new commit before you open your pull request. If you pushed the original commit to the repo in your GitHub account, run this command to force-push the commit with the new message. ``` git push --force-with-lease ``` If you haven't pushed the commit you amended, you can run `git push` with no parameters to push your updated commit. Prepare your branch for a pull request -------------------------------------- When you created your working branch to update the documentation, you branched off the `main` branch. Your changes in the working branch were based on the state of the `main` branch at that time you created the branch. Since you created your working branch, it's quite likely that the `main` branch has been updated. To make sure that your updates work with the current `main` branch, you should `rebase` your working branch to catch it up to what is current. You might also need to squash the commits you made in your working branch to combine them for the pull request. ### Rebase your working branch Rebasing your working branch changes the starting point of your commits from where the `main` branch was when you started to where it is now. Before you can rebase your working branch, you must update both your *clone* and your *fork* of the upstream repo. #### Why you rebase your working branch Rebasing your working branch to the current state of the `main` branch eliminates conflicts before your working branch is merged back into `main`. By rebasing your working branch, the commits in your working branch show only those changes that you made to fix the issue. If you don't rebase your working branch, it can have merge commits. Merge commits are commits that `git` creates to make up for the changes in the `main` branch since the `working` branch was created. Merge commits aren't harmful, but they can complicate a future review of the changes. The following illustrates the rebase process. 
This image shows a `working` branch created from commit 5 of the `main` branch and then updated twice. The numbered circles in these diagrams represent commits. This image shows the `main` branch after it was updated twice as the `working` branch was updated. If the working branch was merged, a merge commit would be needed. This image illustrates the result. To make it easy for future contributors, the Angular team tries to keep the commit log as a linear sequence of changes. Incorporating merge commits includes changes that are the result of the merge along with what the author or developer changed. This makes it harder for future developers and authors to tell how the content evolved. To create a linear sequence of changes, you might need to update your `working` branch and update your changes. To add your updates to the current state of the `main` branch and prevent a merge commit, you rebase the `working` branch. Rebasing is how `git` updates your working branch to make it look like you created it from commit `9`. To do this, it updates the commits in the `working` branch. After rebasing the `working` branch, its commits now start from the last commit of the `main` branch. This image shows the rebased `working` branch with its updated commits. When the rebased `working` branch is merged to main, its commits can now be appended to the `main` branch with no extra merge commits. This image shows the linear `main` branch after merging the updated and rebased `working` branch. #### To update your fork of the upstream repo You want to sync the `main` branch of your origin repo with the `main` branch of the upstream `angular/angular` before you open a pull request. This procedure updates your origin repo, the `personal/angular` repo, on your local computer so it has the current code, as illustrated here. The circled numbers correspond to procedure steps. The last step of this procedure then pushes the update to the fork of the `angular` repo in your GitHub account. Perform these steps from a command-line tool on your local computer. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 2. Run this command to check out the `main` branch. ``` git checkout main ``` 3. Update the `main` branch in the `working` directory on your local computer from the upstream `angular/angular` repo. ``` git fetch upstream git merge upstream/main ``` 4. Update your `personal/angular` repo on `github.com` with the latest from the upstream `angular/angular` repo. ``` git push ``` The `main` branch on your local computer and your origin repo on `github.com` are now in sync. They have been updated with any changes to the upstream `angular/angular` repo that were made since the last time you updated your fork. #### To rebase your working branch Perform these steps from a command-line tool on your local computer. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 2. Run this command to check out your `working` branch. Replace `working-branch` with the name of your `working` branch. ``` git checkout working-branch ``` 3.
Run this command to rebase your branch to add the commits from your `working` branch to the current content in the `main` branch. ``` git rebase main ``` 4. Run this command to update your `working` branch in the repo in your GitHub account. ``` git push --force-with-lease ``` ### Review the commits in your working branch After you rebase your `working` branch, your commits should be after those of the current `main` branch. #### To review the commits that you've added to the `working` branch Perform these steps from a command-line tool on your local computer. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 2. Run this command to confirm that you are using the correct `working` branch. If you aren't in the correct branch, replace `working-branch` with the name of your `working` branch and run `git checkout working-branch` to select the correct branch. ``` git status ``` 3. Review the message from the previous `git status` command. If you aren't in the correct branch, replace `working-branch` with the name of your `working` branch and run `git checkout working-branch` to select the correct branch. 4. Run this command to get a list of the commits in your `working` branch. ``` git log --pretty=format:"%h %as %an %Cblue%s %Cgreen%D" ``` This command returns the log of commits in the `working` branch with the most recent commit at the top of the list. 5. In the output of the previous `git log` command, find the entry that contains `upstream/main`. It should be near the top of the list. 1. **Confirm that the entry that contains `upstream/main` also contains `origin/main` and `main`** If it doesn't, you must resync your clone and your fork of `angular/angular`, and then rebase the branch before you continue. 2. **Confirm that all commits for your update are after the entry that contains `upstream/main`** Remember that the log output is displayed with the most recent commit first. Your commits should all be on top of the entry that contains `upstream/main` in the log output. If any of your commits are listed after the entry that contains `upstream/main`, somehow your commits in the `working` branch got mixed up. You must fix the branch before you open a pull request. 3. **Confirm that your commit messages are in the correct format** The commit message format is tested by the automated tests and it must be in the correct format before the pull request can be approved. 4. **Confirm that your commits and their messages reflect the changes your update makes to Angular** If you have more commits than changes, you might need to squash them into fewer commits before your pull request is approved. Next step --------- After you confirm that your updates and your `working` branch are correct, you are ready to [open a pull request](doc-pr-open). Last reviewed on Wed Oct 12 2022
angular Property binding Property binding ================ Property binding in Angular helps you set values for properties of HTML elements or directives. Use property binding to do things such as toggle button features, set paths programmatically, and share values between components. > See the live example for a working example containing the code snippets in this guide. > > Prerequisites ------------- * [Basics of components](architecture-components) * [Basics of templates](glossary#template) * [Binding syntax](binding-syntax) Understanding the flow of data ------------------------------ Property binding moves a value in one direction, from a component's property into a target element property. > For more information on listening for events, see [Event binding](event-binding). > > To read a target element property or call one of its methods, see the API reference for [ViewChild](../api/core/viewchild) and [ContentChild](../api/core/contentchild). Binding to a property --------------------- To bind to an element's property, enclose it in square brackets, `[]`, which identifies the property as a target property. A target property is the DOM property to which you want to assign a value. To assign a value to a target property for the image element's `src` property, type the following code: ``` <img alt="item" [src]="itemImageUrl"> ``` In most cases, the target name is the name of a property, even when it appears to be the name of an attribute. In this example, `src` is the name of the `<[img](../api/common/ngoptimizedimage)>` element property. The brackets, `[]`, cause Angular to evaluate the right-hand side of the assignment as a dynamic expression. Without the brackets, Angular treats the right-hand side as a string literal and sets the property to that static value. To assign a string to a property, type the following code: ``` <app-item-detail childItem="parentItem"></app-item-detail> ``` Omitting the brackets renders the string `parentItem`, not the value of `parentItem`. Setting an element property to a component property value --------------------------------------------------------- To bind the `src` property of an `<[img](../api/common/ngoptimizedimage)>` element to a component's property, place `src` in square brackets followed by an equal sign and then the property. Using the property `itemImageUrl`, type the following code: ``` <img alt="item" [src]="itemImageUrl"> ``` Declare the `itemImageUrl` property in the class, in this case `AppComponent`. ``` itemImageUrl = '../assets/phone.svg'; ``` #### `colspan` and `colSpan` A common point of confusion is between the attribute, `colspan`, and the property, `colSpan`. Notice that these two names differ by only a single letter. 
To use property binding using `colSpan`, type the following: ``` <!-- Notice the colSpan property is camel case --> <tr><td [colSpan]="1 + 1">Three-Four</td></tr> ``` To disable a button while the component's `isUnchanged` property is `true`, type the following: ``` <!-- Bind button disabled state to `isUnchanged` property --> <button type="button" [disabled]="isUnchanged">Disabled Button</button> ``` To set a property of a directive, type the following: ``` <p [ngClass]="classes">[ngClass] binding to the classes property making this blue</p> ``` To set the model property of a custom component for parent and child components to communicate with each other, type the following: ``` <app-item-detail [childItem]="parentItem"></app-item-detail> ``` Toggling button features ------------------------ To use a Boolean value to disable a button's features, bind the `disabled` DOM attribute to a Boolean property in the class. ``` <!-- Bind button disabled state to `isUnchanged` property --> <button type="button" [disabled]="isUnchanged">Disabled Button</button> ``` Because the value of the property `isUnchanged` is `true` in the `AppComponent`, Angular disables the button. ``` isUnchanged = true; ``` What's next ----------- * [Property binding best practices](property-binding-best-practices) * [Event binding](event-binding) * [Text Interpolation](interpolation) * [Class & Style Binding](class-binding) * [Attribute Binding](attribute-binding) Last reviewed on Thu Apr 14 2022 angular Angular documentation localization guidelines Angular documentation localization guidelines ============================================= One way to contribute to Angular's documentation is to localize it into another language. This topic describes what you need to know to localize Angular and have it listed on our [Localized documentation](localized-documentation) page. Before you begin ---------------- Before you start localizing the Angular documentation, first check to see if a localized version already exists. See [Localized documentation](localized-documentation) for a current list of localized versions of Angular documentation. Getting started --------------- To have a localized version of Angular documentation listed on [angular.io](https://angular.io), you must either: * Be an Angular [Google Developer Expert (GDE)](https://developers.google.com/community/experts) * Have an Angular GDE nominate you for localizing the content Nomination, in this instance, means that the GDE knows who you are and can vouch for your capabilities. An Angular GDE can nominate someone by contacting the Angular team, providing your name, contact information, and the language to which you are localizing. 
What to localize ---------------- To localize Angular documentation, you must include, at a minimum, the following topics: * [Introduction to the Angular docs](docs) * [What is Angular?](what-is-angular) * [Getting started with Angular](start) + [Adding navigation](start/start-routing) + [Managing data](start/start-data) + [Using forms for user input](start/start-forms) + [Deploying an application](start/start-deployment) + [Setting up the local environment and workspace](setup-local) * [Tour of Heroes app and tutorial](../tutorial/tour-of-heroes) + [Create a new project](../tutorial/tour-of-heroes/toh-pt0) + [The hero editor](../tutorial/tour-of-heroes/toh-pt1) + [Display a selection list](../tutorial/tour-of-heroes/toh-pt2) + [Create a feature component](../tutorial/tour-of-heroes/toh-pt3) + [Add services](../tutorial/tour-of-heroes/toh-pt4) + [Add navigation with routing](../tutorial/tour-of-heroes/toh-pt5) + [Get data from a server](../tutorial/tour-of-heroes/toh-pt6) Because these topics reflect the minimum documentation set for localization, the Angular documentation team takes special precautions when making any changes to these topics. Specifically: * The Angular team carefully assesses any incoming pull requests or issues to determine their impact on localized content. * If the Angular team incorporates changes into these topics, the Angular team will communicate those changes to members of the localization community. See the section, [Communications](localizing-angular#communications), for more information. Hosting ------- Individuals and teams that localize Angular documentation assume responsibility for hosting their localized site. The Angular team does not host localized content. The Angular team is also not responsible for providing domain names. Awareness --------- As part of the localization effort, the Angular documentation team adds localized documentation to the [Localized documentation](localized-documentation) page. This topic lists: * The language of the localized documentation * The URL for the localized documentation The Angular team can remove a link on this page for any reason, including but not limited to: * Inability to contact the individual or team * Issues or complaints about the documentation that go unaddressed Communications -------------- The Angular documentation team uses a Slack channel to communicate with members of the community focused on localization efforts. After receiving a nomination to localize content, an individual or team can contact the Angular team to get access to this Slack channel. The Angular documentation team may also conduct meetings to discuss localization efforts. For example: * To discuss additional topics that should be part of the minimum documentation set * To discuss issues with content language that is difficult to translate/localize Last reviewed on Mon Feb 28 2022 angular Introduction to forms in Angular Introduction to forms in Angular ================================ Handling user input with forms is the cornerstone of many common applications. Applications use forms to enable users to log in, to update a profile, to enter sensitive information, and to perform many other data-entry tasks. Angular provides two different approaches to handling user input through forms: reactive and template-driven. Both capture user input events from the view, validate the user input, create a form model and data model to update, and provide a way to track changes. 
This guide provides information to help you decide which type of form works best for your situation. It introduces the common building blocks used by both approaches. It also summarizes the key differences between the two approaches, and demonstrates those differences in the context of setup, data flow, and testing. Prerequisites ------------- This guide assumes that you have a basic understanding of the following. * [TypeScript](https://www.typescriptlang.org/ "The TypeScript language") and HTML5 programming * Angular app-design fundamentals, as described in [Angular Concepts](architecture "Introduction to Angular concepts") * The basics of [Angular template syntax](architecture-components#template-syntax "Template syntax intro") Choosing an approach -------------------- Reactive forms and template-driven forms process and manage form data differently. Each approach offers different advantages. | Forms | Details | | --- | --- | | Reactive forms | Provide direct, explicit access to the underlying form's object model. Compared to template-driven forms, they are more robust: they're more scalable, reusable, and testable. If forms are a key part of your application, or you're already using reactive patterns for building your application, use reactive forms. | | Template-driven forms | Rely on directives in the template to create and manipulate the underlying object model. They are useful for adding a simple form to an app, such as an email list signup form. They're straightforward to add to an app, but they don't scale as well as reactive forms. If you have very basic form requirements and logic that can be managed solely in the template, template-driven forms could be a good fit. | ### Key differences The following table summarizes the key differences between reactive and template-driven forms. | | Reactive | Template-driven | | --- | --- | --- | | [Setup of form model](forms-overview#setup) | Explicit, created in component class | Implicit, created by directives | | [Data model](forms-overview#mutability-of-the-data-model) | Structured and immutable | Unstructured and mutable | | [Data flow](forms-overview#data-flow-in-forms) | Synchronous | Asynchronous | | [Form validation](forms-overview#validation) | Functions | Directives | ### Scalability If forms are a central part of your application, scalability is very important. Being able to reuse form models across components is critical. Reactive forms are more scalable than template-driven forms. They provide direct access to the underlying form API, and use [synchronous data flow](forms-overview#data-flow-in-reactive-forms) between the view and the data model, which makes creating large-scale forms easier. Reactive forms require less setup for testing, and testing does not require deep understanding of change detection to properly test form updates and validation. Template-driven forms focus on simple scenarios and are not as reusable. They abstract away the underlying form API, and use [asynchronous data flow](forms-overview#data-flow-in-template-driven-forms) between the view and the data model. The abstraction of template-driven forms also affects testing. Tests are deeply reliant on manual change detection execution to run properly, and require more setup. Setting up the form model ------------------------- Both reactive and template-driven forms track value changes between the form input elements that users interact with and the form data in your component model. 
The two approaches share underlying building blocks, but differ in how you create and manage the common form-control instances. ### Common form foundation classes Both reactive and template-driven forms are built on the following base classes. | Base classes | Details | | --- | --- | | `[FormControl](../api/forms/formcontrol)` | Tracks the value and validation status of an individual form control. | | `[FormGroup](../api/forms/formgroup)` | Tracks the same values and status for a collection of form controls. | | `[FormArray](../api/forms/formarray)` | Tracks the same values and status for an array of form controls. | | `[ControlValueAccessor](../api/forms/controlvalueaccessor)` | Creates a bridge between Angular `[FormControl](../api/forms/formcontrol)` instances and built-in DOM elements. | ### Setup in reactive forms With reactive forms, you define the form model directly in the component class. The `[formControl]` directive links the explicitly created `[FormControl](../api/forms/formcontrol)` instance to a specific form element in the view, using an internal value accessor. The following component implements an input field for a single control, using reactive forms. In this example, the form model is the `[FormControl](../api/forms/formcontrol)` instance. ``` import { Component } from '@angular/core'; import { FormControl } from '@angular/forms'; @Component({ selector: 'app-reactive-favorite-color', template: ` Favorite Color: <input type="text" [formControl]="favoriteColorControl"> ` }) export class FavoriteColorComponent { favoriteColorControl = new FormControl(''); } ``` Figure 1 shows how, in reactive forms, the form model is the source of truth; it provides the value and status of the form element at any given point in time, through the `[formControl]` directive on the input element. **Figure 1.** *Direct access to forms model in a reactive form.* ### Setup in template-driven forms In template-driven forms, the form model is implicit, rather than explicit. The directive `[NgModel](../api/forms/ngmodel)` creates and manages a `[FormControl](../api/forms/formcontrol)` instance for a given form element. The following component implements the same input field for a single control, using template-driven forms. ``` import { Component } from '@angular/core'; @Component({ selector: 'app-template-favorite-color', template: ` Favorite Color: <input type="text" [(ngModel)]="favoriteColor"> ` }) export class FavoriteColorComponent { favoriteColor = ''; } ``` In a template-driven form the source of truth is the template. You do not have direct programmatic access to the `[FormControl](../api/forms/formcontrol)` instance, as shown in Figure 2. **Figure 2.** *Indirect access to forms model in a template-driven form.* Data flow in forms ------------------ When an application contains a form, Angular must keep the view in sync with the component model and the component model in sync with the view. As users change values and make selections through the view, the new values must be reflected in the data model. Similarly, when the program logic changes values in the data model, those values must be reflected in the view. Reactive and template-driven forms differ in how they handle data flowing from the user or from programmatic changes. The following diagrams illustrate both kinds of data flow for each type of form, using the favorite-color input field defined above. 
### Data flow in reactive forms In reactive forms each form element in the view is directly linked to the form model (a `[FormControl](../api/forms/formcontrol)` instance). Updates from the view to the model and from the model to the view are synchronous and do not depend on how the UI is rendered. The view-to-model diagram shows how data flows when an input field's value is changed from the view through the following steps. 1. The user types a value into the input element, in this case the favorite color *Blue*. 2. The form input element emits an "input" event with the latest value. 3. The control value accessor listening for events on the form input element immediately relays the new value to the `[FormControl](../api/forms/formcontrol)` instance. 4. The `[FormControl](../api/forms/formcontrol)` instance emits the new value through the `valueChanges` observable. 5. Any subscribers to the `valueChanges` observable receive the new value. The model-to-view diagram shows how a programmatic change to the model is propagated to the view through the following steps. 1. The user calls the `favoriteColorControl.setValue()` method, which updates the `[FormControl](../api/forms/formcontrol)` value. 2. The `[FormControl](../api/forms/formcontrol)` instance emits the new value through the `valueChanges` observable. 3. Any subscribers to the `valueChanges` observable receive the new value. 4. The control value accessor on the form input element updates the element with the new value. ### Data flow in template-driven forms In template-driven forms, each form element is linked to a directive that manages the form model internally. The view-to-model diagram shows how data flows when an input field's value is changed from the view through the following steps. 1. The user types *Blue* into the input element. 2. The input element emits an "input" event with the value *Blue*. 3. The control value accessor attached to the input triggers the `setValue()` method on the `[FormControl](../api/forms/formcontrol)` instance. 4. The `[FormControl](../api/forms/formcontrol)` instance emits the new value through the `valueChanges` observable. 5. Any subscribers to the `valueChanges` observable receive the new value. 6. The control value accessor also calls the `[NgModel.viewToModelUpdate()](../api/forms/ngmodel#viewToModelUpdate)` method which emits an `ngModelChange` event. 7. Because the component template uses two-way data binding for the `favoriteColor` property, the `favoriteColor` property in the component is updated to the value emitted by the `ngModelChange` event (*Blue*). The model-to-view diagram shows how data flows from model to view when the `favoriteColor` changes from *Blue* to *Red*, through the following steps 1. The `favoriteColor` value is updated in the component. 2. Change detection begins. 3. During change detection, the `ngOnChanges` lifecycle hook is called on the `[NgModel](../api/forms/ngmodel)` directive instance because the value of one of its inputs has changed. 4. The `ngOnChanges()` method queues an async task to set the value for the internal `[FormControl](../api/forms/formcontrol)` instance. 5. Change detection completes. 6. On the next tick, the task to set the `[FormControl](../api/forms/formcontrol)` instance value is executed. 7. The `[FormControl](../api/forms/formcontrol)` instance emits the latest value through the `valueChanges` observable. 8. Any subscribers to the `valueChanges` observable receive the new value. 9. 
The control value accessor updates the form input element in the view with the latest `favoriteColor` value. ### Mutability of the data model The change-tracking method plays a role in the efficiency of your application. | Forms | Details | | --- | --- | | Reactive forms | Keep the data model pure by providing it as an immutable data structure. Each time a change is triggered on the data model, the `[FormControl](../api/forms/formcontrol)` instance returns a new data model rather than updating the existing data model. This gives you the ability to track unique changes to the data model through the control's observable. Change detection is more efficient because it only needs to update on unique changes. Because data updates follow reactive patterns, you can integrate with observable operators to transform data. | | Template-driven forms | Rely on mutability with two-way data binding to update the data model in the component as changes are made in the template. Because there are no unique changes to track on the data model when using two-way data binding, change detection is less efficient at determining when updates are required. | The difference is demonstrated in the previous examples that use the favorite-color input element. * With reactive forms, the **`[FormControl](../api/forms/formcontrol)` instance** always returns a new value when the control's value is updated * With template-driven forms, the **favorite color property** is always modified to its new value Form validation --------------- Validation is an integral part of managing any set of forms. Whether you're checking for required fields or querying an external API for an existing username, Angular provides a set of built-in validators as well as the ability to create custom validators. | Forms | Details | | --- | --- | | Reactive forms | Define custom validators as **functions** that receive a control to validate | | Template-driven forms | Tied to template **directives**, and must provide custom validator directives that wrap validation functions | For more information, see [Form Validation](form-validation). Testing ------- Testing plays a large part in complex applications. A simpler testing strategy is useful when validating that your forms function correctly. Reactive forms and template-driven forms have different levels of reliance on rendering the UI to perform assertions based on form control and form field changes. The following examples demonstrate the process of testing forms with reactive and template-driven forms. ### Testing reactive forms Reactive forms provide a relatively straightforward testing strategy because they provide synchronous access to the form and data models, and they can be tested without rendering the UI. In these tests, status and data are queried and manipulated through the control without interacting with the change detection cycle. The following tests use the favorite-color components from previous examples to verify the view-to-model and model-to-view data flows for a reactive form. **Verifying view-to-model data flow** The first example performs the following steps to verify the view-to-model data flow. 1. Query the view for the form input element, and create a custom "input" event for the test. 2. Set the new value for the input to *Red*, and dispatch the "input" event on the form input element. 3. Assert that the component's `favoriteColorControl` value matches the value from the input. 
``` it('should update the value of the input field', () => { const input = fixture.nativeElement.querySelector('input'); const event = createNewEvent('input'); input.value = 'Red'; input.dispatchEvent(event); expect(fixture.componentInstance.favoriteColorControl.value).toEqual('Red'); }); ``` The next example performs the following steps to verify the model-to-view data flow. 1. Use the `favoriteColorControl`, a `[FormControl](../api/forms/formcontrol)` instance, to set the new value. 2. Query the view for the form input element. 3. Assert that the new value set on the control matches the value in the input. ``` it('should update the value in the control', () => { component.favoriteColorControl.setValue('Blue'); const input = fixture.nativeElement.querySelector('input'); expect(input.value).toBe('Blue'); }); ``` ### Testing template-driven forms Writing tests with template-driven forms requires a detailed knowledge of the change detection process and an understanding of how directives run on each cycle to ensure that elements are queried, tested, or changed at the correct time. The following tests use the favorite color components mentioned earlier to verify the data flows from view to model and model to view for a template-driven form. The following test verifies the data flow from view to model. ``` it('should update the favorite color in the component', fakeAsync(() => { const input = fixture.nativeElement.querySelector('input'); const event = createNewEvent('input'); input.value = 'Red'; input.dispatchEvent(event); fixture.detectChanges(); expect(component.favoriteColor).toEqual('Red'); })); ``` Here are the steps performed in the view to model test. 1. Query the view for the form input element, and create a custom "input" event for the test. 2. Set the new value for the input to *Red*, and dispatch the "input" event on the form input element. 3. Run change detection through the test fixture. 4. Assert that the component `favoriteColor` property value matches the value from the input. The following test verifies the data flow from model to view. ``` it('should update the favorite color on the input field', fakeAsync(() => { component.favoriteColor = 'Blue'; fixture.detectChanges(); tick(); const input = fixture.nativeElement.querySelector('input'); expect(input.value).toBe('Blue'); })); ``` Here are the steps performed in the model to view test. 1. Use the component instance to set the value of the `favoriteColor` property. 2. Run change detection through the test fixture. 3. Use the `[tick](../api/core/testing/tick)()` method to simulate the passage of time within the `[fakeAsync](../api/core/testing/fakeasync)()` task. 4. Query the view for the form input element. 5. Assert that the input value matches the value of the `favoriteColor` property in the component instance. Next steps ---------- To learn more about reactive forms, see the following guides: * [Reactive forms](reactive-forms) * [Form validation](form-validation#reactive-form-validation) * [Dynamic forms](dynamic-form) To learn more about template-driven forms, see the following guides: * [Building a template-driven form](forms) tutorial * [Form validation](form-validation#template-driven-validation) * `[NgForm](../api/forms/ngform)` directive API reference Last reviewed on Mon Feb 28 2022
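As a brief supplement to the Form validation section above, the following is a minimal sketch of a reactive-forms validator written as a function that receives a control, which is the pattern the guide describes. The `forbiddenColorValidator` name and the rule it enforces are hypothetical and only illustrate the shape of such a function; they are not part of the guide's example code.

```
import { AbstractControl, FormControl, ValidationErrors, ValidatorFn } from '@angular/forms';

// Hypothetical validator factory: returns a function that rejects one specific color value.
export function forbiddenColorValidator(forbidden: string): ValidatorFn {
  return (control: AbstractControl): ValidationErrors | null =>
    control.value === forbidden ? { forbiddenColor: { value: control.value } } : null;
}

// Attach the validator when the control is created, as with the
// favorite-color control used throughout this guide.
const favoriteColorControl = new FormControl('', [forbiddenColorValidator('beige')]);
```

Setting the control to the forbidden value makes `favoriteColorControl.errors` non-null, which can be checked synchronously in a test in the same way as the value assertions shown earlier.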
angular Animation transitions and triggers Animation transitions and triggers ================================== This guide goes into depth on special transition states such as the `*` wildcard and `void`. It shows how these special states are used for elements entering and leaving a view. This section also explores multiple animation triggers, animation callbacks, and sequence-based animation using keyframes. Predefined states and wildcard matching --------------------------------------- In Angular, transition states can be defined explicitly through the [`state()`](../api/animations/state) function, or using the predefined `*` wildcard and `void` states. ### Wildcard state An asterisk `*` or *wildcard* matches any animation state. This is useful for defining transitions that apply regardless of the HTML element's start or end state. For example, a transition of `open => *` applies when the element's state changes from open to anything else. The following is another code sample using the wildcard state together with the previous example using the `open` and `closed` states. Instead of defining each state-to-state transition pair, any transition to `closed` takes 1 second, and any transition to `open` takes 0.5 seconds. This allows the addition of new states without having to include separate transitions for each one. ``` animations: [ trigger('openClose', [ // ... state('open', style({ height: '200px', opacity: 1, backgroundColor: 'yellow' })), state('closed', style({ height: '100px', opacity: 0.8, backgroundColor: 'blue' })), transition('* => closed', [ animate('1s') ]), transition('* => open', [ animate('0.5s') ]), ]), ], ``` Use a double arrow syntax to specify state-to-state transitions in both directions. ``` transition('open <=> closed', [ animate('0.5s') ]), ``` ### Use wildcard state with multiple transition states In the two-state button example, the wildcard isn't that useful because there are only two possible states, `open` and `closed`. In general, use wildcard states when an element has multiple potential states that it can change to. If the button can change from `open` to either `closed` or something like `inProgress`, using a wildcard state could reduce the amount of coding needed. ``` animations: [ trigger('openClose', [ // ... state('open', style({ height: '200px', opacity: 1, backgroundColor: 'yellow' })), state('closed', style({ height: '100px', opacity: 0.8, backgroundColor: 'blue' })), transition('open => closed', [ animate('1s') ]), transition('closed => open', [ animate('0.5s') ]), transition('* => closed', [ animate('1s') ]), transition('* => open', [ animate('0.5s') ]), transition('open <=> closed', [ animate('0.5s') ]), transition ('* => open', [ animate ('1s', style ({ opacity: '*' }), ), ]), transition('* => *', [ animate('1s') ]), ``` The `* => *` transition applies when any change between two states takes place. Transitions are matched in the order in which they are defined. Thus, you can apply other transitions on top of the `* => *` transition. For example, define style changes or animations that would apply just to `open => closed`, then use `* => *` as a fallback for state pairings that aren't otherwise called out. To do this, list the more specific transitions *before* `* => *`. ### Use wildcards with styles Use the wildcard `*` with a style to tell the animation to use whatever the current style value is, and animate with that. Wildcard is a fallback value that's used if the state being animated isn't declared within the trigger. 
``` transition ('* => open', [ animate ('1s', style ({ opacity: '*' }), ), ]), ``` ### Void state Use the `void` state to configure transitions for an element that is entering or leaving a page. See [Animating entering and leaving a view](transition-and-triggers#enter-leave-view). ### Combine wildcard and void states Combine wildcard and void states in a transition to trigger animations that enter and leave the page: * A transition of `* => void` applies when the element leaves a view, regardless of what state it was in before it left * A transition of `void => *` applies when the element enters a view, regardless of what state it assumes when entering * The wildcard state `*` matches to *any* state, including `void` Animate entering and leaving a view ----------------------------------- This section shows how to animate elements entering or leaving a page. Add a new behavior: * When you add a hero to the list of heroes, it appears to fly onto the page from the left * When you remove a hero from the list, it appears to fly out to the right ``` animations: [ trigger('flyInOut', [ state('in', style({ transform: 'translateX(0)' })), transition('void => *', [ style({ transform: 'translateX(-100%)' }), animate(100) ]), transition('* => void', [ animate(100, style({ transform: 'translateX(100%)' })) ]) ]) ] ``` In the preceding code, you applied the `void` state when the HTML element isn't attached to a view. Aliases :enter and :leave ------------------------- `:enter` and `:leave` are aliases for the `void => *` and `* => void` transitions. These aliases are used by several animation functions. ``` transition ( ':enter', [ … ] ); // alias for void => * transition ( ':leave', [ … ] ); // alias for * => void ``` It's harder to target an element that is entering a view because it isn't in the DOM yet. Use the aliases `:enter` and `:leave` to target HTML elements that are inserted or removed from a view. ### Use `*[ngIf](../api/common/ngif)` and `*[ngFor](../api/common/ngfor)` with :enter and :leave The `:enter` transition runs when any `*[ngIf](../api/common/ngif)` or `*[ngFor](../api/common/ngfor)` views are placed on the page, and `:leave` runs when those views are removed from the page. > **NOTE**: Entering/leaving behaviors can sometime be confusing. As a rule of thumb consider that any element being added to the DOM by Angular passes via the `:enter` transition. Only elements being directly removed from the DOM by Angular pass via the `:leave` transition. For example, an element's view is removed from the DOM because its parent is being removed from the DOM. > > This example has a special trigger for the enter and leave animation called `myInsertRemoveTrigger`. The HTML template contains the following code. ``` <div @myInsertRemoveTrigger *ngIf="isShown" class="insert-remove-container"> <p>The box is inserted</p> </div> ``` In the component file, the `:enter` transition sets an initial opacity of 0. It then animates it to change that opacity to 1 as the element is inserted into the view. ``` trigger('myInsertRemoveTrigger', [ transition(':enter', [ style({ opacity: 0 }), animate('100ms', style({ opacity: 1 })), ]), transition(':leave', [ animate('100ms', style({ opacity: 0 })) ]) ]), ``` Note that this example doesn't need to use [`state()`](../api/animations/state). Transition :increment and :decrement ------------------------------------ The `[transition](../api/animations/transition)()` function takes other selector values, `:increment` and `:decrement`. 
Use these to kick off a transition when a numeric value has increased or decreased in value. > **NOTE**: The following example uses `[query](../api/animations/query)()` and `[stagger](../api/animations/stagger)()` methods. For more information on these methods, see the [complex sequences](complex-animation-sequences#complex-sequence) page. > > ``` trigger('filterAnimation', [ transition(':enter, * => 0, * => -1', []), transition(':increment', [ query(':enter', [ style({ opacity: 0, width: 0 }), stagger(50, [ animate('300ms ease-out', style({ opacity: 1, width: '*' })), ]), ], { optional: true }) ]), transition(':decrement', [ query(':leave', [ stagger(50, [ animate('300ms ease-out', style({ opacity: 0, width: 0 })), ]), ]) ]), ]), ``` Boolean values in transitions ----------------------------- If a trigger contains a Boolean value as a binding value, then this value can be matched using a `[transition](../api/animations/transition)()` expression that compares `true` and `false`, or `1` and `0`. ``` <div [@openClose]="isOpen ? true : false" class="open-close-container"> </div> ``` In the code snippet above, the HTML template binds a `<div>` element to a trigger named `openClose` with a status expression of `isOpen`, and with possible values of `true` and `false`. This pattern is an alternative to the practice of creating two named states like `open` and `close`. Inside the `@[Component](../api/core/component)` metadata under the `animations:` property, when the state evaluates to `true`, the associated HTML element's height is a wildcard style or default. In this case, the animation uses whatever height the element already had before the animation started. When the element is `closed`, the element gets animated to a height of 0, which makes it invisible. ``` animations: [ trigger('openClose', [ state('true', style({ height: '*' })), state('false', style({ height: '0px' })), transition('false <=> true', animate(500)) ]) ], ``` Multiple animation triggers --------------------------- You can define more than one animation trigger for a component. Attach animation triggers to different elements, and the parent-child relationships among the elements affect how and when the animations run. ### Parent-child animations Each time an animation is triggered in Angular, the parent animation always gets priority and child animations are blocked. For a child animation to run, the parent animation must query each of the elements containing child animations. It then lets the animations run using the [`animateChild()`](../api/animations/animatechild) function. #### Disable an animation on an HTML element A special animation control binding called `@.disabled` can be placed on an HTML element to turn off animations on that element, as well as any nested elements. When true, the `@.disabled` binding prevents all animations from rendering. The following code sample shows how to use this feature. ``` <div [@.disabled]="isDisabled"> <div [@childAnimation]="isOpen ? 'open' : 'closed'" class="open-close-container"> <p>The box is now {{ isOpen ? 'Open' : 'Closed' }}!</p> </div> </div> ``` ``` @Component({ animations: [ trigger('childAnimation', [ // ... ]), ], }) export class OpenCloseChildComponent { isDisabled = false; isOpen = false; } ``` When the `@.disabled` binding is true, the `@childAnimation` trigger doesn't kick off. When an element within an HTML template has animations turned off using the `@.disabled` host binding, animations are turned off on all inner elements as well. 
You can't selectively turn off multiple animations on a single element. Selective child animations can still be run on a disabled parent in one of the following ways: * A parent animation can use the [`query()`](../api/animations/query) function to collect inner elements located in disabled areas of the HTML template. Those elements can still animate. * A child animation can be queried by a parent and then later animated with the `[animateChild](../api/animations/animatechild)()` function. #### Disable all animations To turn off all animations for an Angular application, place the `@.disabled` host binding on the topmost Angular component. ``` @Component({ selector: 'app-root', templateUrl: 'app.component.html', styleUrls: ['app.component.css'], animations: [ slideInAnimation ] }) export class AppComponent { @HostBinding('@.disabled') public animationsDisabled = false; } ``` > **NOTE**: Disabling animations application-wide is useful during end-to-end (E2E) testing. > > Animation callbacks ------------------- The animation `[trigger](../api/animations/trigger)()` function emits *callbacks* when it starts and when it finishes. The following example features a component that contains an `openClose` trigger. ``` @Component({ selector: 'app-open-close', animations: [ trigger('openClose', [ // ... ]), ], templateUrl: 'open-close.component.html', styleUrls: ['open-close.component.css'] }) export class OpenCloseComponent { onAnimationEvent(event: AnimationEvent) { } } ``` In the HTML template, the animation event is passed back via `$event`, as `@triggerName.start` and `@triggerName.done`, where `triggerName` is the name of the trigger being used. In this example, the trigger `openClose` appears as follows. ``` <div [@openClose]="isOpen ? 'open' : 'closed'" (@openClose.start)="onAnimationEvent($event)" (@openClose.done)="onAnimationEvent($event)" class="open-close-container"> </div> ``` A potential use for animation callbacks could be to cover for a slow API call, such as a database lookup. For example, an **InProgress** button can be set up to have its own looping animation while the backend system operation finishes. Another animation can be called when the current animation finishes. For example, the button goes from the `inProgress` state to the `closed` state when the API call is completed. An animation can influence an end user to *perceive* the operation as faster, even when it is not. Callbacks can serve as a debugging tool, for example in conjunction with `console.warn()` to view the application's progress in a browser's Developer JavaScript Console. The following code snippet creates console log output for the original example, a button with the two states of `open` and `closed`. ``` export class OpenCloseComponent { onAnimationEvent(event: AnimationEvent) { // openClose is trigger name in this example console.warn(`Animation Trigger: ${event.triggerName}`); // phaseName is "start" or "done" console.warn(`Phase: ${event.phaseName}`); // in our example, totalTime is 1000 (number of milliseconds in a second) console.warn(`Total time: ${event.totalTime}`); // in our example, fromState is either "open" or "closed" console.warn(`From: ${event.fromState}`); // in our example, toState is either "open" or "closed" console.warn(`To: ${event.toState}`); // the HTML element itself, the button in this case console.warn(`Element: ${event.element}`); } } ``` Keyframes --------- To create an animation with multiple steps run in sequence, use *keyframes*.
Angular's `keyframes()` function allows several style changes within a single timing segment. For example, the button, instead of fading, could change color several times over a single 2-second time span. The code for this color change might look like this. ``` transition('* => active', [ animate('2s', keyframes([ style({ backgroundColor: 'blue' }), style({ backgroundColor: 'red' }), style({ backgroundColor: 'orange' }) ])) ]), ``` ### Offset Keyframes include an `offset` that defines the point in the animation where each style change occurs. Offsets are relative measures from zero to one, marking the beginning and end of the animation. If you specify an offset for any keyframe, apply offsets to all of the keyframe steps. Defining offsets for keyframes is optional. If you omit them, evenly spaced offsets are automatically assigned. For example, three keyframes without predefined offsets receive offsets of 0, 0.5, and 1. The following code specifies an offset of 0.8 for the middle keyframe of the preceding example. ``` transition('* => active', [ animate('2s', keyframes([ style({ backgroundColor: 'blue', offset: 0}), style({ backgroundColor: 'red', offset: 0.8}), style({ backgroundColor: '#754600', offset: 1.0}) ])), ]), transition('* => inactive', [ animate('2s', keyframes([ style({ backgroundColor: '#754600', offset: 0}), style({ backgroundColor: 'red', offset: 0.2}), style({ backgroundColor: 'blue', offset: 1.0}) ])) ]), ``` You can combine keyframes with `duration`, `delay`, and `easing` within a single animation. ### Keyframes with a pulsation Use keyframes to create a pulse effect in your animations by defining styles at specific offsets throughout the animation. Here's an example of using keyframes to create a pulse effect: * The original `open` and `closed` states, with the original changes in height, color, and opacity, occurring over a timeframe of 1 second * A keyframes sequence inserted in the middle that causes the button to appear to pulsate irregularly over the course of that same 1 second timeframe The code snippet for this animation might look like this. ``` trigger('openClose', [ state('open', style({ height: '200px', opacity: 1, backgroundColor: 'yellow' })), state('close', style({ height: '100px', opacity: 0.5, backgroundColor: 'green' })), // ... transition('* => *', [ animate('1s', keyframes ( [ style({ opacity: 0.1, offset: 0.1 }), style({ opacity: 0.6, offset: 0.2 }), style({ opacity: 1, offset: 0.5 }), style({ opacity: 0.2, offset: 0.7 }) ])) ]) ]) ``` ### Animatable properties and units Angular's animation support builds on top of web animations, so you can animate any property that the browser considers animatable. This includes positions, sizes, transforms, colors, borders, and more. The W3C maintains a list of animatable properties on its [CSS Transitions](https://www.w3.org/TR/css-transitions-1) page. For properties with a numeric value, define a unit by providing the value as a string, in quotes, with the appropriate suffix: * 50 pixels: `'50px'` * Relative font size: `'3em'` * Percentage: `'100%'` You can also provide the value as a number. In such cases Angular assumes a default unit of pixels, or `px`. Expressing 50 pixels as `50` is the same as saying `'50px'`. > **NOTE**: The string `"50"`, however, is not considered valid. > > ### Automatic property calculation with wildcards Sometimes, the value of a dimensional style property isn't known until runtime.
For example, elements often have widths and heights that depend on their content or the screen size. These properties are often challenging to animate using CSS. In these cases, you can use a special wildcard `*` property value under `[style](../api/animations/style)()`. The value of that particular style property is computed at runtime and then plugged into the animation. The following example has a trigger called `shrinkOut`, used when an HTML element leaves the page. The animation takes whatever height the element has before it leaves, and animates from that height to zero. ``` animations: [ trigger('shrinkOut', [ state('in', style({ height: '*' })), transition('* => void', [ style({ height: '*' }), animate(250, style({ height: 0 })) ]) ]) ] ``` ### Keyframes summary The `[keyframes](../api/animations/keyframes)()` function in Angular allows you to specify multiple interim styles within a single transition. An optional `offset` can be used to define the point in the animation where each style change should occur. More on Angular animations -------------------------- You might also be interested in the following: * [Introduction to Angular animations](animations) * [Complex animation sequences](complex-animation-sequences) * [Reusable animations](reusable-animations) * [Route transition animations](route-animations) Last reviewed on Tue Oct 11 2022 angular Example applications Example applications ==================== The following is a list of the example applications in the [Angular documentation](docs). Fundamentals ------------ These examples demonstrate minimal, fundamental concepts. ### Getting started application Introductory application demonstrating Angular features. For more information, see [Getting started](start). ### Launching your app Demonstrates the Angular bootstrapping process. For more information, see [Launching your app with a root module](bootstrapping). ### Structure of Angular applications Demonstrates the fundamental architecture of Angular applications. For more information, see [Introduction to Angular concepts](architecture). Tour of Heroes tutorial application ----------------------------------- The Tour of Heroes is a comprehensive tutorial that guides you through the process of building an application with many of Angular's most popular features. ### Tour of Heroes: completed application Completed Tour of Heroes example application. For more information, see [Tour of Heroes app and tutorial](../tutorial/tour-of-heroes). ### Tour of Heroes: Creating an application Initial Tour of Heroes example application for beginning the tutorial. For more information, see [Create a new project](../tutorial/tour-of-heroes/toh-pt0). ### Tour of Heroes: The hero editor First step of the Tour of Heroes example application. For more information, see [The hero editor](../tutorial/tour-of-heroes/toh-pt1). ### Tour of Heroes: Display a selection list Second step of the Tour of Heroes example application. For more information, see [Display a selection list](../tutorial/tour-of-heroes/toh-pt2). ### Tour of Heroes: Create a feature component Third step of the Tour of Heroes example application. For more information, see [Create a feature component](../tutorial/tour-of-heroes/toh-pt3). ### Tour of Heroes: Add services Fourth step of the Tour of Heroes example application. For more information, see [Add services](../tutorial/tour-of-heroes/toh-pt4). ### Tour of Heroes: Add in-app navigation with routing Fifth step of the Tour of Heroes example application. 
For more information, see [Add in-app navigation with routing](../tutorial/tour-of-heroes/toh-pt5). ### Tour of Heroes: Get data from a server Sixth and final step of the Tour of Heroes example application. For more information, see [Get data from a server](../tutorial/tour-of-heroes/toh-pt6). Working with templates ---------------------- These examples demonstrate features of Angular templates. ### Accessibility Demonstrates building Angular applications in a more accessible way. For more information, see [Accessibility](accessibility). ### Animations Demonstrates Angular's animation features. For more information, see [Introduction to Angular animations](animations). ### Attribute, class, and style bindings Demonstrates Angular attribute, class, and style bindings. For more information, see [Attribute, class, and style bindings](attribute-binding). ### Attribute directives Demonstrates Angular attribute directives. For more information, see [Attribute directives](attribute-directives). ### Binding syntax Demonstrates Angular's binding syntax. For more information, see [Binding syntax: an overview](binding-syntax). ### Built-in directives Demonstrates Angular built-in directives. For more information, see [Built-in directives](built-in-directives). ### Built-in template functions Demonstrates Angular built-in template functions. For more information, see the [`$any()` type cast function section](template-expression-operators#the-any-type-cast-function) of [Template expression operators](template-expression-operators). ### Content projection Demonstrates how to use Angular's content projection feature when creating reusable components. ### Interpolation Demonstrates Angular interpolation. For more information, see [Interpolation and template expressions](interpolation). ### Template expression operators Demonstrates expression operators in Angular templates. For more information, see [Template expression operators](template-expression-operators). ### Template reference variables Demonstrates Angular's template reference variables. For more information, see [Template reference variables](template-reference-variables). ### `<ngcontainer>` Demonstrates `<ngcontainer>`. For more information, see the [ng-container section](built-in-directives#ngcontainer) of [Built-in directives](structural-directives) . ### Pipes Demonstrates Angular pipes. For more information, see [Transforming Data Using Pipes](pipes). ### Property binding Demonstrates property binding in Angular. For more information, see [Property binding](property-binding). ### Structural directives Demonstrates Angular structural directives. For more information, see [Structural directives](structural-directives). ### Two-way binding Demonstrates two-way data binding in Angular applications. For more information, see [Two-way binding](two-way-binding). ### Template syntax Comprehensive demonstration of Angular's template syntax. For more information, see [Template reference variables](template-syntax). ### User input Demonstrates responding to user actions. For more information, see [User input](user-input). Working with components ----------------------- These examples demonstrate features of Angular components. ### Component interaction Demonstrates how Angular shares data between components. For more information, see [Component interaction](component-interaction). ### Component styles Demonstrates styling in Angular applications. For more information, see [Component styles](component-styles). 
### Dynamic component loader Demonstrates how to dynamically load components. For more information, see [Dynamic component loader](dynamic-component-loader). ### Elements Demonstrates using Angular custom elements. For more information, see [Angular elements overview](elements). ### Event binding Demonstrates binding to events in Angular. For more information, see [Event binding](event-binding). ### `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` Demonstrates `@[Input](../api/core/input)()` and `@[Output](../api/core/output)()` in components and directives. For more information, see [`@Input()` and `@Output()` properties](inputs-outputs). ### Lifecycle hooks Demonstrates Angular lifecycle hooks such as `ngOnInit()` and `ngOnChanges()`. For more information, see [Hooking into the component lifecycle](lifecycle-hooks). Dependency injection -------------------- ### Dependency injection fundamentals Demonstrates fundamentals of Angular dependency injection. For more information, see [Dependency injection](dependency-injection). ### Dependency injection features Demonstrates many of the features of Angular dependency injection. For more information, see [Dependency injection in action](dependency-injection). ### Providing dependencies in NgModules Demonstrates providing services in NgModules. For more information, see [Providing dependencies in modules](providers). ### Hierarchical dependency injection Demonstrates Angular injector trees and resolution modifiers. For more information, see [Hierarchical injectors](hierarchical-dependency-injection). ### Dependency injection with `providers` and `viewProviders` Demonstrates how `providers` and `viewproviders` affect dependency injection. For more information, see the [Providing services in `@Component()`](hierarchical-dependency-injection#providing-services-in-component) section of [Hierarchical injectors](hierarchical-dependency-injection). ### Resolution modifiers and dependency injection Demonstrates Angular's resolution modifiers, such as `@[Self](../api/core/self)()`. For more information, see the [Modifying service visibility](hierarchical-dependency-injection#modifying-service-visibility) section of [Hierarchical injectors](hierarchical-dependency-injection). Forms ----- ### Forms overview Demonstrates foundational concepts of Angular forms. For more information, see [Introduction to forms in Angular](forms-overview). ### Reactive forms Demonstrates Angular's reactive forms. For more information, see [Reactive forms](reactive-forms). ### Template-driven forms Demonstrates Angular template-driven forms. For more information, see [Building a template-driven form](forms). ### Form validation Demonstrates validating forms in Angular. For more information, see [Validating form input](form-validation). ### Dynamic forms Demonstrates creating dynamic forms. For more information, see [Building dynamic forms](dynamic-form). NgModules --------- ### NgModules Demonstrates fundamentals of NgModules. For more information, see [NgModules](ngmodules). ### Feature modules Demonstrates using feature modules in Angular. For more information, see [Feature modules](feature-modules). ### Lazy loading NgModules Demonstrates lazy loading NgModules. For more information, see [Lazy-loading feature modules](lazy-loading-ngmodules). Routing ------- ### Router Demonstrates Angular's routing features. For more information, see [Router](router). ### Router tutorial Demonstrates Angular's fundamental routing techniques. 
For more information, see [Using Angular routes in a single-page application](router-tutorial). Documentation ------------- ### Style guide for Documentation contributions Demonstrates Angular documentation style guidelines. For more information, see [Angular documentation style guide](docs-style-guide). Server communication -------------------- ### `[HttpClient](../api/common/http/httpclient)` Demonstrates server interaction using HTTP. For more information, see [Communicating with backend services using HTTP](http). Workflow -------- ### Security Demonstrates security concepts in Angular applications. For more information, see [Security](security). ### Testing For the sample application that the testing guides describe, see the sample app. Demonstrates techniques for testing Angular. For more information, see [Testing](testing). Hybrid Angular applications --------------------------- ### AngularJS to Angular concepts: Quick reference Demonstrates Angular for those with an AngularJS background. For more information, see [AngularJS to Angular concepts: Quick reference](ajs-quick-reference). Last reviewed on Mon Feb 28 2022
angular Testing Attribute Directives Testing Attribute Directives ============================ An *attribute directive* modifies the behavior of an element, component or another directive. Its name reflects the way the directive is applied: as an attribute on a host element. > If you'd like to experiment with the application that this guide describes, run it in your browser or download and run it locally. > > Testing the `HighlightDirective` -------------------------------- The sample application's `HighlightDirective` sets the background color of an element based on either a data bound color or a default color (lightgray). It also sets a custom property of the element (`customProperty`) to `true` for no reason other than to show that it can. ``` import { Directive, ElementRef, Input, OnChanges } from '@angular/core'; @Directive({ selector: '[highlight]' }) /** * Set backgroundColor for the attached element to highlight color * and set the element's customProperty to true */ export class HighlightDirective implements OnChanges { defaultColor = 'rgb(211, 211, 211)'; // lightgray @Input('highlight') bgColor = ''; constructor(private el: ElementRef) { el.nativeElement.style.customProperty = true; } ngOnChanges() { this.el.nativeElement.style.backgroundColor = this.bgColor || this.defaultColor; } } ``` It's used throughout the application, perhaps most simply in the `AboutComponent`: ``` import { Component } from '@angular/core'; @Component({ template: ` <h2 highlight="skyblue">About</h2> <h3>Quote of the day:</h3> <twain-quote></twain-quote> ` }) export class AboutComponent { } ``` Testing the specific use of the `HighlightDirective` within the `AboutComponent` requires only the techniques explored in the ["Nested component tests"](testing-components-scenarios#nested-component-tests) section of [Component testing scenarios](testing-components-scenarios). ``` beforeEach(() => { fixture = TestBed.configureTestingModule({ declarations: [ AboutComponent, HighlightDirective ], schemas: [ CUSTOM_ELEMENTS_SCHEMA ] }) .createComponent(AboutComponent); fixture.detectChanges(); // initial binding }); it('should have skyblue <h2>', () => { const h2: HTMLElement = fixture.nativeElement.querySelector('h2'); const bgColor = h2.style.backgroundColor; expect(bgColor).toBe('skyblue'); }); ``` However, testing a single use case is unlikely to explore the full range of a directive's capabilities. Finding and testing all components that use the directive is tedious, brittle, and almost as unlikely to afford full coverage. *Class-only tests* might be helpful, but attribute directives like this one tend to manipulate the DOM. Isolated unit tests don't touch the DOM and, therefore, do not inspire confidence in the directive's efficacy. A better solution is to create an artificial test component that demonstrates all ways to apply the directive. ``` @Component({ template: ` <h2 highlight="yellow">Something Yellow</h2> <h2 highlight>The Default (Gray)</h2> <h2>No Highlight</h2> <input #box [highlight]="box.value" value="cyan"/>` }) class TestComponent { } ``` > The `<input>` case binds the `HighlightDirective` to the name of a color value in the input box. The initial value is the word "cyan" which should be the background color of the input box. 
> > Here are some tests of this component: ``` beforeEach(() => { fixture = TestBed.configureTestingModule({ declarations: [ HighlightDirective, TestComponent ] }) .createComponent(TestComponent); fixture.detectChanges(); // initial binding // all elements with an attached HighlightDirective des = fixture.debugElement.queryAll(By.directive(HighlightDirective)); // the h2 without the HighlightDirective bareH2 = fixture.debugElement.query(By.css('h2:not([highlight])')); }); // color tests it('should have three highlighted elements', () => { expect(des.length).toBe(3); }); it('should color 1st <h2> background "yellow"', () => { const bgColor = des[0].nativeElement.style.backgroundColor; expect(bgColor).toBe('yellow'); }); it('should color 2nd <h2> background w/ default color', () => { const dir = des[1].injector.get(HighlightDirective) as HighlightDirective; const bgColor = des[1].nativeElement.style.backgroundColor; expect(bgColor).toBe(dir.defaultColor); }); it('should bind <input> background to value color', () => { // easier to work with nativeElement const input = des[2].nativeElement as HTMLInputElement; expect(input.style.backgroundColor) .withContext('initial backgroundColor') .toBe('cyan'); input.value = 'green'; // Dispatch a DOM event so that Angular responds to the input value change. input.dispatchEvent(new Event('input')); fixture.detectChanges(); expect(input.style.backgroundColor) .withContext('changed backgroundColor') .toBe('green'); }); it('bare <h2> should not have a customProperty', () => { expect(bareH2.properties['customProperty']).toBeUndefined(); }); ``` A few techniques are noteworthy: * The `By.directive` predicate is a great way to get the elements that have this directive *when their element types are unknown* * The [`:not` pseudo-class](https://developer.mozilla.org/docs/Web/CSS/:not) in `By.css('h2:not([highlight])')` helps find `<h2>` elements that *do not* have the directive. `By.css('*:not([highlight])')` finds *any* element that does not have the directive. * `[DebugElement.styles](../api/core/debugelement#styles)` affords access to element styles even in the absence of a real browser, thanks to the `[DebugElement](../api/core/debugelement)` abstraction. But feel free to exploit the `nativeElement` when that seems easier or more clear than the abstraction. * Angular adds a directive to the injector of the element to which it is applied. The test for the default color uses the injector of the second `<h2>` to get its `HighlightDirective` instance and its `defaultColor`. * `[DebugElement.properties](../api/core/debugelement#properties)` affords access to the artificial custom property that is set by the directive Last reviewed on Mon Feb 28 2022 angular Server-side rendering (SSR) with Angular Universal Server-side rendering (SSR) with Angular Universal ================================================== This guide describes **Angular Universal**, a technology that renders Angular applications on the server. A normal Angular application executes in the *browser*, rendering pages in the DOM in response to user actions. Angular Universal executes on the *server*, generating *static* application pages that later get bootstrapped on the client. This means that the application generally renders more quickly, giving users a chance to view the application layout before it becomes fully interactive. 
For a more detailed look at different techniques and concepts surrounding SSR, check out this [article](https://developers.google.com/web/updates/2019/02/rendering-on-the-web). Easily prepare an application for server-side rendering using the [Angular CLI](glossary#cli). The CLI schematic `@nguniversal/express-engine` performs the required steps, as described. > Angular Universal requires an [active LTS or maintenance LTS](https://nodejs.org/about/releases) version of Node.js. See the `engines` property in the [package.json](https://unpkg.com/browse/@angular/platform-server/package.json) file to learn about the currently supported versions. > > > **NOTE**: Download the finished sample code, which runs in a [Node.js® Express](https://expressjs.com) server. > > Universal tutorial ------------------ The [Tour of Heroes tutorial](../tutorial/tour-of-heroes) is the foundation for this walkthrough. In this example, the Angular CLI compiles and bundles the Universal version of the application with the [Ahead-of-Time (AOT) compiler](aot-compiler). A Node.js Express web server compiles HTML pages with Universal based on client requests. To create the server-side application module, `app.server.module.ts`, run the following CLI command. ``` ng add @nguniversal/express-engine ``` The command creates the following folder structure. ``` src index.html // <-- app web page main.ts // <-- bootstrapper for client app main.server.ts // <-- * bootstrapper for server app style.css // <-- styles for the app app/ … // <-- application code app.server.module.ts // <-- * server-side application module server.ts // <-- * express web server tsconfig.json // <-- TypeScript base configuration tsconfig.app.json // <-- TypeScript browser application configuration tsconfig.server.json // <-- TypeScript server application configuration tsconfig.spec.json // <-- TypeScript tests configuration ``` The files marked with `*` are new and not in the original tutorial sample. ### Universal in action To start rendering your application with Universal on your local system, use the following command. ``` npm run dev:ssr ``` Open a browser and navigate to `http://localhost:4200`. You should see the familiar Tour of Heroes dashboard page. Navigation using `routerLinks` works correctly because they use the built-in anchor (`<a>`) elements. You can go from the Dashboard to the Heroes page and back. Click a hero on the Dashboard page to display its Details page. If you throttle your network speed so that the client-side scripts take longer to download (instructions following), you'll notice: * You can't add or delete a hero * The search box on the Dashboard page is ignored * The *Back* and *Save* buttons on the Details page don't work User events other than `[routerLink](../api/router/routerlink)` clicks aren't supported. You must wait for the full client application to bootstrap and run, or buffer the events using libraries like [preboot](https://github.com/angular/preboot), which lets you replay these events once the client-side scripts load. The transition from the server-rendered application to the client application happens quickly on a development machine, but you should always test your applications in real-world scenarios. You can simulate a slower network to see the transition more clearly as follows: 1. Open the Chrome Dev Tools and go to the Network tab. 2.
Find the [Network Throttling](https://developers.google.com/web/tools/chrome-devtools/network-performance/reference#throttling) dropdown on the far right of the menu bar. 3. Try one of the "3G" speeds. The server-rendered application still launches quickly but the full client application might take seconds to load. Why use server-side rendering? ------------------------------ There are three main reasons to create a Universal version of your application. * Facilitate web crawlers through [search engine optimization (SEO)](https://static.googleusercontent.com/media/www.google.com/en//webmasters/docs/search-engine-optimization-starter-guide.pdf) * Improve performance on mobile and low-powered devices * Show the first page quickly with a [first-contentful paint (FCP)](https://developers.google.com/web/tools/lighthouse/audits/first-contentful-paint) ### Facilitate web crawlers (SEO) Google, Bing, Facebook, Twitter, and other social media sites rely on web crawlers to index your application content and make that content searchable on the web. These web crawlers might be unable to navigate and index your highly interactive Angular application as a human user could do. Angular Universal can generate a static version of your application that is easily searchable, linkable, and navigable without JavaScript. Universal also makes a site preview available because each URL returns a fully rendered page. ### Improve performance on mobile and low-powered devices Some devices don't support JavaScript or execute JavaScript so poorly that the user experience is unacceptable. For these cases, you might require a server-rendered, no-JavaScript version of the application. This version, however limited, might be the only practical alternative for people who otherwise couldn't use the application at all. ### Show the first page quickly Displaying the first page quickly can be critical for user engagement. Pages that load faster perform better, [even with changes as small as 100ms](https://web.dev/shopping-for-speed-on-ebay). Your application might have to launch faster to engage these users before they decide to do something else. With Angular Universal, you can generate landing pages for the application that look like the complete application. The pages are pure HTML, and can display even if JavaScript is disabled. The pages don't handle browser events, but they *do* support navigation through the site using [`routerLink`](router-reference#router-link). In practice, you'll serve a static version of the landing page to hold the user's attention. At the same time, you'll load the full Angular application behind it. The user perceives near-instant performance from the landing page and gets the full interactive experience after the full application loads. Universal web servers --------------------- A Universal web server responds to application page requests with static HTML rendered by the [Universal template engine](universal#universal-engine). The server receives and responds to HTTP requests from clients (usually browsers), and serves static assets such as scripts, CSS, and images. It might respond to data requests, either directly or as a proxy to a separate data server. The sample web server for this guide is based on the popular [Express](https://expressjs.com) framework. > **NOTE**: *Any* web server technology can serve a Universal application as long as it can call Universal's `[renderModule](../api/platform-server/rendermodule)()` function. 
The principles and decision points discussed here apply to any web server technology. > > Universal applications use the Angular `platform-server` package (as opposed to `platform-browser`), which provides server implementations of the DOM, `XMLHttpRequest`, and other low-level features that don't rely on a browser. The server ([Node.js Express](https://expressjs.com) in this guide's example) passes client requests for application pages to the NgUniversal `ngExpressEngine`. Under the hood, this calls Universal's `[renderModule](../api/platform-server/rendermodule)()` function, while providing caching and other helpful utilities. The `[renderModule](../api/platform-server/rendermodule)()` function takes as inputs a *template* HTML page (usually `index.html`), an Angular *module* containing components, and a *route* that determines which components to display. The route comes from the client's request to the server. Each request results in the appropriate view for the requested route. The `[renderModule](../api/platform-server/rendermodule)()` function renders the view within the `<app>` tag of the template, creating a finished HTML page for the client. Finally, the server returns the rendered page to the client. ### Working around the browser APIs Because a Universal application doesn't execute in the browser, some of the browser APIs and capabilities might be missing on the server. For example, server-side applications can't reference browser-only global objects such as `window`, `document`, `navigator`, or `location`. Angular provides some injectable abstractions over these objects, such as [`Location`](../api/common/location) or [`DOCUMENT`](../api/common/document); it might substitute adequately for these APIs. If Angular doesn't provide it, it's possible to write new abstractions that delegate to the browser APIs while in the browser and to an alternative implementation while on the server (also known as shimming). Similarly, without mouse or keyboard events, a server-side application can't rely on a user clicking a button to show a component. The application must determine what to render based solely on the incoming client request. This is a good argument for making the application [routable](router). ### Universal template engine The important bit in the `server.ts` file is the `ngExpressEngine()` function. ``` // Our Universal express-engine (found @ https://github.com/angular/universal/tree/main/modules/express-engine) server.engine('html', ngExpressEngine({ bootstrap: AppServerModule, })); ``` The `ngExpressEngine()` function is a wrapper around Universal's `[renderModule](../api/platform-server/rendermodule)()` function which turns a client's requests into server-rendered HTML pages. It accepts an object with the following properties: | Properties | Details | | --- | --- | | `bootstrap` | The root `[NgModule](../api/core/ngmodule)` or `[NgModule](../api/core/ngmodule)` factory to use for bootstrapping the application when rendering on the server. For the example application, it is `AppServerModule`. It's the bridge between the Universal server-side renderer and the Angular application. | | `extraProviders` | This property is optional and lets you specify dependency providers that apply only when rendering the application on the server. Do this when your application needs information that can only be determined by the currently running server instance. | The `ngExpressEngine()` function returns a `Promise` callback that resolves to the rendered page. 
It's up to the engine to decide what to do with that page. This engine's `Promise` callback returns the rendered page to the web server, which then forwards it to the client in the HTTP response. > **NOTE**: These wrappers help hide the complexity of the `[renderModule](../api/platform-server/rendermodule)()` function. There are more wrappers for different backend technologies at the [Universal repository](https://github.com/angular/universal). > > ### Filtering request URLs > **NOTE**: The basic behavior described below is handled automatically when using the NgUniversal Express schematic. This is helpful when trying to understand the underlying behavior or replicate it without using the schematic. > > The web server must distinguish *app page requests* from other kinds of requests. It's not as simple as intercepting a request to the root address `/`. The browser could ask for one of the application routes such as `/dashboard`, `/heroes`, or `/detail:12`. In fact, if the application were only rendered by the server, *every* application link clicked would arrive at the server as a navigation URL intended for the router. Fortunately, application routes have something in common: their URLs lack file extensions. (Data requests also lack extensions but they can be recognized because they always begin with `/api`.) All static asset requests have a file extension (such as `main.js` or `/node_modules/zone.js/bundles/zone.umd.js`). Because you use routing, you can recognize the three types of requests and handle them differently. | Routing request types | Details | | --- | --- | | Data request | Request URL that begins `/api`. | | App navigation | Request URL with no file extension. | | Static asset | All other requests. | A Node.js Express server is a pipeline of middleware that filters and processes requests one after the other. You configure the Node.js Express server pipeline with calls to `server.get()` like this one for data requests. ``` // TODO: implement data requests securely server.get('/api/**', (req, res) => { res.status(404).send('data requests are not yet supported'); }); ``` > **NOTE**: This sample server doesn't handle data requests. > > The tutorial's "in-memory web API" module, a demo and development tool, intercepts all HTTP calls and simulates the behavior of a remote data server. In practice, you would remove that module and register your web API middleware on the server here. > > The following code filters for request URLs with no extensions and treats them as navigation requests. ``` // All regular routes use the Universal engine server.get('*', (req, res) => { res.render(indexHtml, { req, providers: [{ provide: APP_BASE_HREF, useValue: req.baseUrl }] }); }); ``` ### Serving static files safely A single `server.use()` treats all other URLs as requests for static assets such as JavaScript, image, and style files. To ensure that clients can only download the files that they are permitted to see, put all client-facing asset files in the `/dist` folder and only honor requests for files from the `/dist` folder. The following Node.js Express code routes all remaining requests to `/dist`, and returns a `404 - NOT FOUND` error if the file isn't found. ``` // Serve static files from /browser server.get('*.*', express.static(distFolder, { maxAge: '1y' })); ``` ### Using absolute URLs for HTTP (data) requests on the server The tutorial's `HeroService` and `HeroSearchService` delegate to the Angular `[HttpClient](../api/common/http/httpclient)` module to fetch application data. 
These services send requests to *relative* URLs such as `api/heroes`. In a server-side rendered app, HTTP URLs must be *absolute* (for example, `https://my-server.com/api/heroes`). This means that the URLs must be somehow converted to absolute when running on the server and be left relative when running in the browser. If you are using one of the `@nguniversal/*-engine` packages (such as `@nguniversal/express-engine`), this is taken care for you automatically. You don't need to do anything to make relative URLs work on the server. If, for some reason, you are not using an `@nguniversal/*-engine` package, you might need to handle it yourself. The recommended solution is to pass the full request URL to the `options` argument of [renderModule()](../api/platform-server/rendermodule) or [renderModuleFactory()](../api/platform-server/rendermodulefactory) (depending on what you use to render `AppServerModule` on the server). This option is the least intrusive as it does not require any changes to the application. Here, "request URL" refers to the URL of the request as a response to which the application is being rendered on the server. For example, if the client requested `https://my-server.com/dashboard` and you are rendering the application on the server to respond to that request, `options.url` should be set to `https://my-server.com/dashboard`. Now, on every HTTP request made as part of rendering the application on the server, Angular can correctly resolve the request URL to an absolute URL, using the provided `options.url`. ### Useful scripts | Scripts | Details | | --- | --- | | ``` npm run dev:ssr ``` | Similar to [`ng serve`](cli/serve), which offers live reload during development, but uses server-side rendering. The application runs in watch mode and refreshes the browser after every change. This command is slower than the actual `ng serve` command. | | ``` ng build && ng run app-name:server ``` | Builds both the server script and the application in production mode. Use this command when you want to build the project for deployment. | | ``` npm run serve:ssr ``` | Starts the server script for serving the application locally with server-side rendering. It uses the build artifacts created by `ng run build:ssr`, so make sure you have run that command as well. **NOTE**: `serve:ssr` is not intended to be used to serve your application in production, but only for testing the server-side rendered application locally. | | ``` npm run prerender ``` | Used to prerender an application's pages. Read more about prerendering [here](prerendering). | Last reviewed on Mon Feb 28 2022
angular Find out how much code you're testing Find out how much code you're testing ===================================== The Angular CLI can run unit tests and create code coverage reports. Code coverage reports show you any parts of your code base that might not be properly tested by your unit tests. > If you'd like to experiment with the application that this guide describes, run it in your browser or download and run it locally. > > To generate a coverage report, run the following command in the root of your project. ``` ng test --no-watch --code-coverage ``` When the tests are complete, the command creates a new `/coverage` directory in the project. Open the `index.html` file to see a report with your source code and code coverage values. If you want to create code-coverage reports every time you test, set the following option in the Angular CLI configuration file, `angular.json`: ``` "test": { "options": { "codeCoverage": true } } ``` Code coverage enforcement ------------------------- The code coverage percentages let you estimate how much of your code is tested. If your team decides on a set minimum amount to be unit tested, enforce this minimum with the Angular CLI. For example, suppose you want the code base to have a minimum of 80% code coverage. To enable this, open the [Karma](https://karma-runner.github.io) test platform configuration file, `karma.conf.js`, and add the `check` property in the `coverageReporter:` key. ``` coverageReporter: { dir: require('path').join(__dirname, './coverage/<project-name>'), subdir: '.', reporters: [ { type: 'html' }, { type: 'text-summary' } ], check: { global: { statements: 80, branches: 80, functions: 80, lines: 80 } } } ``` > Read more about creating and fine-tuning Karma configuration in the [testing guide](testing#configuration). > > The `check` property causes the tool to enforce a minimum of 80% code coverage when the unit tests are run in the project. Read more on coverage configuration options in the [karma coverage documentation](https://github.com/karma-runner/karma-coverage/blob/master/docs/configuration.md). Last reviewed on Tue Jan 17 2023 angular Content projection Content projection ================== This topic describes how to use content projection to create flexible, reusable components. > To view or download the example code used in this topic, see the live example. > > Content projection is a pattern in which you insert, or *project*, the content you want to use inside another component. For example, you could have a `Card` component that accepts content provided by another component. The following sections describe common implementations of content projection in Angular, including: | Content projection | Details | | --- | --- | | [Single-slot content projection](content-projection#single-slot) | With this type of content projection, a component accepts content from a single source. | | [Multi-slot content projection](content-projection#multi-slot) | In this scenario, a component accepts content from multiple sources. | | [Conditional content projection](content-projection#conditional) | Components that use conditional content projection render content only when specific conditions are met. | Single-slot content projection ------------------------------ The most basic form of content projection is *single-slot content projection*. Single-slot content projection refers to creating a component into which you can project one component. To create a component that uses single-slot content projection: 1.
[Create a component](component-overview#creating-a-component). 2. In the template for your component, add an `[<ng-content>](../api/core/ng-content)` element where you want the projected content to appear. For example, the following component uses an `[<ng-content>](../api/core/ng-content)` element to display a message. ``` import { Component } from '@angular/core'; @Component({ selector: 'app-zippy-basic', template: ` <h2>Single-slot content projection</h2> <ng-content></ng-content> ` }) export class ZippyBasicComponent {} ``` With the `[<ng-content>](../api/core/ng-content)` element in place, users of this component can now project their own message into the component. For example: ``` <app-zippy-basic> <p>Is content projection cool?</p> </app-zippy-basic> ``` > The `[<ng-content>](../api/core/ng-content)` element is a placeholder that does not create a real DOM element. Custom attributes applied to `[<ng-content>](../api/core/ng-content)` are ignored. > > Multi-slot content projection ----------------------------- A component can have multiple slots. Each slot can specify a CSS selector that determines which content goes into that slot. This pattern is referred to as *multi-slot content projection*. With this pattern, you must specify where you want the projected content to appear. You accomplish this task by using the `select` attribute of `[<ng-content>](../api/core/ng-content)`. To create a component that uses multi-slot content projection: 1. [Create a component](component-overview#creating-a-component). 2. In the template for your component, add an `[<ng-content>](../api/core/ng-content)` element where you want the projected content to appear. 3. Add a `select` attribute to the `[<ng-content>](../api/core/ng-content)` elements. Angular supports [selectors](https://developer.mozilla.org/docs/Web/CSS/CSS_Selectors) for any combination of tag name, attribute, CSS class, and the `:not` pseudo-class. For example, the following component uses two `[<ng-content>](../api/core/ng-content)` elements. ``` import { Component } from '@angular/core'; @Component({ selector: 'app-zippy-multislot', template: ` <h2>Multi-slot content projection</h2> Default: <ng-content></ng-content> Question: <ng-content select="[question]"></ng-content> ` }) export class ZippyMultislotComponent {} ``` Content that uses the `question` attribute is projected into the `[<ng-content>](../api/core/ng-content)` element with the `select=[question]` attribute. ``` <app-zippy-multislot> <p question> Is content projection cool? </p> <p>Let's learn about content projection!</p> </app-zippy-multislot> ``` If your component includes an `[<ng-content>](../api/core/ng-content)` element without a `select` attribute, that instance receives all projected components that do not match any of the other `[<ng-content>](../api/core/ng-content)` elements. In the preceding example, only the second `[<ng-content>](../api/core/ng-content)` element defines a `select` attribute. As a result, the first `[<ng-content>](../api/core/ng-content)` element receives any other content projected into the component. Conditional content projection ------------------------------ If your component needs to *conditionally* render content, or render content multiple times, you should configure that component to accept an `[<ng-template>](../api/core/ng-template)` element that contains the content you want to conditionally render. 
Using an `[<ng-content>](../api/core/ng-content)` element in these cases is not recommended, because when the consumer of a component supplies the content, that content is *always* initialized, even if the component does not define an `[<ng-content>](../api/core/ng-content)` element or if that `[<ng-content>](../api/core/ng-content)` element is inside of an `[ngIf](../api/common/ngif)` statement. With an `[<ng-template>](../api/core/ng-template)` element, you can have your component explicitly render content based on any condition you want, as many times as you want. Angular will not initialize the content of an `[<ng-template>](../api/core/ng-template)` element until that element is explicitly rendered. The following steps demonstrate a typical implementation of conditional content projection using `[<ng-template>](../api/core/ng-template)`. 1. [Create a component](component-overview#creating-a-component). 2. In the component that accepts an `[<ng-template>](../api/core/ng-template)` element, use an `[<ng-container>](../api/core/ng-container)` element to render that template, such as: ``` <ng-container [ngTemplateOutlet]="content.templateRef"></ng-container> ``` This example uses the `[ngTemplateOutlet](../api/common/ngtemplateoutlet)` directive to render a given `[<ng-template>](../api/core/ng-template)` element, which you will define in a later step. You can apply an `[ngTemplateOutlet](../api/common/ngtemplateoutlet)` directive to any type of element. This example assigns the directive to an `[<ng-container>](../api/core/ng-container)` element because the component does not need to render a real DOM element. 3. Wrap the `[<ng-container>](../api/core/ng-container)` element in another element, such as a `div` element, and apply your conditional logic. ``` <div *ngIf="expanded" [id]="contentId"> <ng-container [ngTemplateOutlet]="content.templateRef"></ng-container> </div> ``` 4. In the template where you want to project content, wrap the projected content in an `[<ng-template>](../api/core/ng-template)` element, such as: ``` <ng-template appExampleZippyContent> It depends on what you do with it. </ng-template> ``` The `[<ng-template>](../api/core/ng-template)` element defines a block of content that a component can render based on its own logic. A component can get a reference to this template content, or `[TemplateRef](../api/core/templateref)`, by using either the `@[ContentChild](../api/core/contentchild)` or `@[ContentChildren](../api/core/contentchildren)` decorators. The preceding example creates a custom directive, `appExampleZippyContent`, as an API to mark the `[<ng-template>](../api/core/ng-template)` for the component's content. With the `[TemplateRef](../api/core/templateref)`, the component can render the referenced content by using either the `[ngTemplateOutlet](../api/common/ngtemplateoutlet)` directive, or with the `[ViewContainerRef](../api/core/viewcontainerref)` method `createEmbeddedView()`. 5. [Create an attribute directive](attribute-directives#building-an-attribute-directive) with a selector that matches the custom attribute for your template. In this directive, inject a `[TemplateRef](../api/core/templateref)` instance. ``` @Directive({ selector: '[appExampleZippyContent]' }) export class ZippyContentDirective { constructor(public templateRef: TemplateRef<unknown>) {} } ``` In the previous step, you added an `[<ng-template>](../api/core/ng-template)` element with a custom attribute, `appExampleZippyContent`. 
This code provides the logic that Angular will use when it encounters that custom attribute. In this case, that logic instructs Angular to instantiate a template reference. 6. In the component you want to project content into, use `@[ContentChild](../api/core/contentchild)` to get the template of the projected content. ``` @ContentChild(ZippyContentDirective) content!: ZippyContentDirective; ``` Prior to this step, your application has a component that instantiates a template when certain conditions are met. You've also created a directive that provides a reference to that template. In this last step, the `@[ContentChild](../api/core/contentchild)` decorator instructs Angular to instantiate the template in the designated component. > In the case of multi-slot content projection, use `@[ContentChildren](../api/core/contentchildren)` to get a `[QueryList](../api/core/querylist)` of projected elements. > > Projecting content in more complex environments ----------------------------------------------- As described in [Multi-slot Content Projection](content-projection#multi-slot), you typically use either an attribute, element, CSS Class, or some combination of all three to identify where to project your content. For example, in the following HTML template, a paragraph tag uses a custom attribute, `question`, to project content into the `app-zippy-multislot` component. ``` <app-zippy-multislot> <p question> Is content projection cool? </p> <p>Let's learn about content projection!</p> </app-zippy-multislot> ``` In some cases, you might want to project content as a different element. For example, the content you want to project might be a child of another element. Accomplish this with the `ngProjectAs` attribute. For instance, consider the following HTML snippet: ``` <ng-container ngProjectAs="[question]"> <p>Is content projection cool?</p> </ng-container> ``` This example uses an `[<ng-container>](../api/core/ng-container)` attribute to simulate projecting a component into a more complex structure. The `ng-container` element is a logical construct that is used to group other DOM elements; however, the `ng-container` itself is not rendered in the DOM tree. In this example, the content we want to project resides inside another element. To project this content as intended, the template uses the `ngProjectAs` attribute. With `ngProjectAs`, the entire `[<ng-container>](../api/core/ng-container)` element is projected into a component using the `[question]` selector. Last reviewed on Mon Feb 28 2022 angular Angular components overview Angular components overview =========================== Components are the main building block for Angular applications. Each component consists of: * An HTML template that declares what renders on the page * A TypeScript class that defines behavior * A CSS selector that defines how the component is used in a template * Optionally, CSS styles applied to the template This topic describes how to create and configure an Angular component. > To view or download the example code used in this topic, see the live example. > > Prerequisites ------------- To create a component, verify that you have met the following prerequisites: 1. [Install the Angular CLI.](setup-local#install-the-angular-cli) 2. [Create an Angular workspace](setup-local#create-a-workspace-and-initial-application) with initial application. If you don't have a project, create one using `ng new <project-name>`, where `<project-name>` is the name of your Angular application. 
Creating a component -------------------- The best way to create a component is with the Angular CLI. You can also create a component manually. ### Creating a component using the Angular CLI To create a component using the Angular CLI: 1. From a terminal window, navigate to the directory containing your application. 2. Run the `ng generate component <component-name>` command, where `<component-name>` is the name of your new component. By default, this command creates the following: * A directory named after the component * A component file, `<component-name>.component.ts` * A template file, `<component-name>.component.html` * A CSS file, `<component-name>.component.css` * A testing specification file, `<component-name>.component.spec.ts` Where `<component-name>` is the name of your component. > You can change how `ng generate component` creates new components. For more information, see [ng generate component](cli/generate#component-command) in the Angular CLI documentation. > > ### Creating a component manually Although the Angular CLI is the best way to create an Angular component, you can also create a component manually. This section describes how to create the core component file within an existing Angular project. To create a new component manually: 1. Navigate to your Angular project directory. 2. Create a new file, `<component-name>.component.ts`. 3. At the top of the file, add the following import statement. ``` import { Component } from '@angular/core'; ``` 4. After the `import` statement, add a `@[Component](../api/core/component)` decorator. ``` @Component({ }) ``` 5. Choose a CSS selector for the component. ``` @Component({ selector: 'app-component-overview', }) ``` For more information on choosing a selector, see [Specifying a component's selector](component-overview#specifying-a-components-css-selector). 6. Define the HTML template that the component uses to display information. In most cases, this template is a separate HTML file. ``` @Component({ selector: 'app-component-overview', templateUrl: './component-overview.component.html', }) ``` For more information on defining a component's template, see [Defining a component's template](component-overview#defining-a-components-template). 7. Select the styles for the component's template. In most cases, you define the styles for your component's template in a separate file. ``` @Component({ selector: 'app-component-overview', templateUrl: './component-overview.component.html', styleUrls: ['./component-overview.component.css'] }) ``` 8. Add a `class` statement that includes the code for the component. ``` export class ComponentOverviewComponent { } ``` Specifying a component's CSS selector ------------------------------------- Every component requires a CSS *selector*. A selector instructs Angular to instantiate this component wherever it finds the corresponding tag in template HTML. For example, consider a component `hello-world.component.ts` that defines its selector as `app-hello-world`. This selector instructs Angular to instantiate this component any time the tag `<app-hello-world>` appears in a template. Specify a component's selector by adding a `selector` statement to the `@[Component](../api/core/component)` decorator. ``` @Component({ selector: 'app-component-overview', }) ``` Defining a component's template ------------------------------- A template is a block of HTML that tells Angular how to render the component in your application. 
Define a template for your component in one of two ways: by referencing an external file, or directly within the component. To define a template as an external file, add a `templateUrl` property to the `@[Component](../api/core/component)` decorator. ``` @Component({ selector: 'app-component-overview', templateUrl: './component-overview.component.html', }) ``` To define a template within the component, add a `template` property to the `@[Component](../api/core/component)` decorator that contains the HTML you want to use. ``` @Component({ selector: 'app-component-overview', template: '<h1>Hello World!</h1>', }) ``` If you want your template to span multiple lines, use backticks (```). For example: ``` @Component({ selector: 'app-component-overview', template: ` <h1>Hello World!</h1> <p>This template definition spans multiple lines.</p> ` }) ``` > An Angular component requires a template defined using `template` or `templateUrl`. You cannot have both statements in a component. > > Declaring a component's styles ------------------------------ Declare component styles used for its template in one of two ways: By referencing an external file, or directly within the component. To declare the styles for a component in a separate file, add a `styleUrls` property to the `@[Component](../api/core/component)` decorator. ``` @Component({ selector: 'app-component-overview', templateUrl: './component-overview.component.html', styleUrls: ['./component-overview.component.css'] }) ``` To declare the styles within the component, add a `styles` property to the `@[Component](../api/core/component)` decorator that contains the styles you want to use. ``` @Component({ selector: 'app-component-overview', template: '<h1>Hello World!</h1>', styles: ['h1 { font-weight: normal; }'] }) ``` The `styles` property takes an array of strings that contain the CSS rule declarations. Next steps ---------- * For an architectural overview of components, see [Introduction to components and templates](architecture-components) * For additional options to use when creating a component, see [Component](../api/core/component) in the API Reference * For more information on styling components, see [Component styles](component-styles) * For more information on templates, see [Template syntax](template-syntax) Last reviewed on Mon Feb 28 2022
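As a recap of the manual creation steps in the overview above, a complete `component-overview.component.ts` might look like the following sketch. It simply assembles the import, the `@Component` decorator settings, and the empty class shown earlier; nothing here goes beyond those snippets.

```
import { Component } from '@angular/core';

@Component({
  selector: 'app-component-overview',
  templateUrl: './component-overview.component.html',
  styleUrls: ['./component-overview.component.css']
})
export class ComponentOverviewComponent {
}
```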
angular Refer to locales by ID Refer to locales by ID ====================== Angular uses the Unicode *locale identifier* (Unicode locale ID) to find the correct locale data for internationalization of text strings. * A locale ID conforms to the [Unicode Common Locale Data Repository (CLDR) core specification](https://cldr.unicode.org/development/core-specification "Core Specification | Unicode CLDR Project"). For more information about locale IDs, see [Unicode Language and Locale Identifiers](https://cldr.unicode.org/development/core-specification#h.vgyyng33o798 "Unicode Language and Locale Identifiers - Core Specification | Unicode CLDR Project"). * CLDR and Angular use [BCP 47 tags](https://www.rfc-editor.org/info/bcp47 "BCP 47 | RFC Editor") as the base for the locale ID A locale ID specifies the language, country, and an optional code for further variants or subdivisions. A locale ID consists of the language identifier, a hyphen (`-`) character, and the locale extension. ``` {language_id}-{locale_extension} ``` > To accurately translate your Angular project, you must decide which languages and locales you are targeting for internationalization. > > Many countries share the same language, but differ in usage. The differences include grammar, punctuation, formats for currency, decimal numbers, dates, and so on. > > For the examples in this guide, use the following languages and locales. | Language | Locale | Unicode locale ID | | --- | --- | --- | | English | Canada | `en-CA` | | English | United States of America | `en-US` | | French | Canada | `fr-CA` | | French | France | `fr-FR` | The [Angular repository](https://github.com/angular/angular/tree/main/packages/common/locales "angular/packages/common/locales | angular/angular | GitHub") includes common locales. For a list of language codes, see [ISO 639-2](https://www.loc.gov/standards/iso639-2 "ISO 639-2 Registration Authority | Library of Congress"). Set the source locale ID ------------------------ Use the Angular CLI to set the source language in which you are writing the component template and code. By default, Angular uses `en-US` as the source locale of your project. To change the source locale of your project for the build, complete the following actions. 1. Open the [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file. 2. Change the source locale in the `sourceLocale` field. What's next ----------- * [Format data based on locale](i18n-common-format-data-locale "Format data based on locale | Angular") Last reviewed on Thu Oct 28 2021 angular Creating an injectable service Creating an injectable service ============================== Service is a broad category encompassing any value, function, or feature that an application needs. A service is typically a class with a narrow, well-defined purpose. A component is one type of class that can use DI. Angular distinguishes components from services to increase modularity and reusability. By separating a component's view-related features from other kinds of processing, you can make your component classes lean and efficient. Ideally, a component's job is to enable the user experience and nothing more. A component should present properties and methods for data binding, to mediate between the view (rendered by the template) and the application logic (which often includes some notion of a model). 
A component can delegate certain tasks to services, such as fetching data from the server, validating user input, or logging directly to the console. By defining such processing tasks in an injectable service class, you make those tasks available to any component. You can also make your application more adaptable by injecting different providers of the same kind of service, as appropriate in different circumstances. Angular does not enforce these principles. Angular helps you follow these principles by making it easy to factor your application logic into services and make those services available to components through DI. Service examples ---------------- Here's an example of a service class that logs to the browser console. ``` export class Logger { log(msg: any) { console.log(msg); } error(msg: any) { console.error(msg); } warn(msg: any) { console.warn(msg); } } ``` Services can depend on other services. For example, here's a `HeroService` that depends on the `Logger` service, and also uses `BackendService` to get heroes. That service in turn might depend on the `[HttpClient](../api/common/http/httpclient)` service to fetch heroes asynchronously from a server. ``` export class HeroService { private heroes: Hero[] = []; constructor( private backend: BackendService, private logger: Logger) { } getHeroes() { this.backend.getAll(Hero).then( (heroes: Hero[]) => { this.logger.log(`Fetched ${heroes.length} heroes.`); this.heroes.push(...heroes); // fill cache }); return this.heroes; } } ``` Creating an injectable service ------------------------------ Angular CLI provides a command to create a new service. In the following example, you add a new service to your application, which was created earlier with the `ng new` command. To generate a new `HeroService` class in the `src/app/heroes` folder, follow these steps: 1. Run this [Angular CLI](cli) command: ``` ng generate service heroes/hero ``` This command creates the following default `HeroService`. ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root', }) export class HeroService { } ``` The `@[Injectable](../api/core/injectable)()` decorator specifies that Angular can use this class in the DI system. The metadata, `providedIn: 'root'`, means that the `HeroService` is visible throughout the application. 2. Add a `getHeroes()` method that returns the heroes from `mock.heroes.ts` to get the hero mock data: ``` import { Injectable } from '@angular/core'; import { HEROES } from './mock-heroes'; @Injectable({ // declares that this service should be created // by the root application injector. providedIn: 'root', }) export class HeroService { getHeroes() { return HEROES; } } ``` For clarity and maintainability, it is recommended that you define components and services in separate files. Injecting services ------------------ To inject a service as a dependency into a component, you can use component's `constructor()` and supply a constructor argument with the dependency type. The following example specifies the `HeroService` in the `HeroListComponent` constructor. The type of the `heroService` is `HeroService`. Angular recognizes the `HeroService` as a dependency, since that class was previously annotated with the `@[Injectable](../api/core/injectable)` decorator. ``` constructor(heroService: HeroService) ``` Injecting services in other services ------------------------------------ When a service depends on another service, follow the same pattern as injecting into a component. 
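As a reminder of that component-side pattern, here is a minimal sketch of a component that injects `HeroService` through its constructor. The `HeroListComponent` name comes from the example above, but its template, the `Hero` type, and the import paths are assumptions made for illustration only.

```
import { Component } from '@angular/core';
import { Hero } from './hero';                 // assumed location of the Hero type
import { HeroService } from './hero.service';  // assumed location of the service

@Component({
  selector: 'app-hero-list',
  template: '<ul><li *ngFor="let hero of heroes">{{hero.name}}</li></ul>'
})
export class HeroListComponent {
  heroes: Hero[] = [];

  // Angular sees the HeroService parameter type, resolves it from the
  // injector, and passes the shared instance in when creating the component.
  constructor(private heroService: HeroService) {
    this.heroes = this.heroService.getHeroes();
  }
}
```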
In the following example `HeroService` depends on a `Logger` service to report its activities. First, import the `Logger` service. Next, inject the `Logger` service in the `HeroService` `constructor()` by specifying `private logger: Logger`. Here, the `constructor()` specifies a type of `Logger` and stores the instance of `Logger` in a private field called `logger`. The following code tabs feature the `Logger` service and two versions of `HeroService`. The first version of `HeroService` does not depend on the `Logger` service. The revised second version does depend on `Logger` service. ``` import { Injectable } from '@angular/core'; import { HEROES } from './mock-heroes'; import { Logger } from '../logger.service'; @Injectable({ providedIn: 'root', }) export class HeroService { constructor(private logger: Logger) { } getHeroes() { this.logger.log('Getting heroes ...'); return HEROES; } } ``` ``` import { Injectable } from '@angular/core'; import { HEROES } from './mock-heroes'; @Injectable({ providedIn: 'root', }) export class HeroService { getHeroes() { return HEROES; } } ``` ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root' }) export class Logger { logs: string[] = []; // capture logs for testing log(message: string) { this.logs.push(message); console.log(message); } } ``` In this example, the `getHeroes()` method uses the `Logger` service by logging a message when fetching heroes. What's next ----------- * [How to configure dependencies in DI](dependency-injection-providers) * [How to use `InjectionTokens` to provide and inject values other than services/classes](dependency-injection-providers#configuring-dependency-providers) * [Dependency Injection in Action](dependency-injection-in-action) Last reviewed on Tue Aug 02 2022 angular Localized documentation Localized documentation ======================= This topic lists localized versions of the Angular documentation. * [Español](http://docs.angular.lat) * [简体中文版](https://angular.cn) * [正體中文版](https://angular.tw) * [日本語版](https://angular.jp) * [한국어](https://angular.kr) * [Ελληνικά](https://angular-gr.web.app) For information on localizing Angular documentation, see [Angular localization guidelines](localizing-angular). Last reviewed on Mon Feb 28 2022 angular SVG as templates SVG as templates ================ You can use SVG files as templates in your Angular applications. When you use an SVG as the template, you are able to use directives and bindings just like with HTML templates. Use these features to dynamically generate interactive graphics. > See the for a working example containing the code snippets in this guide. > > SVG syntax example ------------------ The following example shows the syntax for using an SVG as a template. 
``` import { Component } from '@angular/core'; @Component({ selector: 'app-svg', templateUrl: './svg.component.svg', styleUrls: ['./svg.component.css'] }) export class SvgComponent { fillColor = 'rgb(255, 0, 0)'; changeColor() { const r = Math.floor(Math.random() * 256); const g = Math.floor(Math.random() * 256); const b = Math.floor(Math.random() * 256); this.fillColor = `rgb(${r}, ${g}, ${b})`; } } ``` To see property and event binding in action, add the following code to your `svg.component.svg` file: ``` <svg> <g> <rect x="0" y="0" width="100" height="100" [attr.fill]="fillColor" (click)="changeColor()" /> <text x="120" y="50">click the rectangle to change the fill color</text> </g> </svg> ``` The example given uses a `click()` event binding and the property binding syntax (`[attr.fill]="fillColor"`). Last reviewed on Mon Feb 28 2022 angular Component interaction Component interaction ===================== This cookbook contains recipes for common component communication scenarios in which two or more components share information. **See the** . Pass data from parent to child with input binding ------------------------------------------------- `HeroChildComponent` has two ***input properties***, typically adorned with [@Input() decorator](inputs-outputs#input). ``` import { Component, Input } from '@angular/core'; import { Hero } from './hero'; @Component({ selector: 'app-hero-child', template: ` <h3>{{hero.name}} says:</h3> <p>I, {{hero.name}}, am at your service, {{masterName}}.</p> ` }) export class HeroChildComponent { @Input() hero!: Hero; @Input('master') masterName = ''; } ``` The second `@[Input](../api/core/input)` aliases the child component property name `masterName` as `'master'`. The `HeroParentComponent` nests the child `HeroChildComponent` inside an `*[ngFor](../api/common/ngfor)` repeater, binding its `master` string property to the child's `master` alias, and each iteration's `hero` instance to the child's `hero` property. ``` import { Component } from '@angular/core'; import { HEROES } from './hero'; @Component({ selector: 'app-hero-parent', template: ` <h2>{{master}} controls {{heroes.length}} heroes</h2> <app-hero-child *ngFor="let hero of heroes" [hero]="hero" [master]="master"> </app-hero-child> ` }) export class HeroParentComponent { heroes = HEROES; master = 'Master'; } ``` The running application displays three heroes: ### Test it for Pass data from parent to child with input binding E2E test that all children were instantiated and displayed as expected: ``` // ... const heroNames = ['Dr. IQ', 'Magneta', 'Bombasto']; const masterName = 'Master'; it('should pass properties to children properly', async () => { const parent = element(by.tagName('app-hero-parent')); const heroes = parent.all(by.tagName('app-hero-child')); for (let i = 0; i < heroNames.length; i++) { const childTitle = await heroes.get(i).element(by.tagName('h3')).getText(); const childDetail = await heroes.get(i).element(by.tagName('p')).getText(); expect(childTitle).toEqual(heroNames[i] + ' says:'); expect(childDetail).toContain(masterName); } }); // ... ``` [Back to top](component-interaction#top) Intercept input property changes with a setter ---------------------------------------------- Use an input property setter to intercept and act upon a value from the parent. The setter of the `name` input property in the child `NameChildComponent` trims the whitespace from a name and replaces an empty value with default text. 
``` import { Component, Input } from '@angular/core'; @Component({ selector: 'app-name-child', template: '<h3>"{{name}}"</h3>' }) export class NameChildComponent { @Input() get name(): string { return this._name; } set name(name: string) { this._name = (name && name.trim()) || '<no name set>'; } private _name = ''; } ``` Here's the `NameParentComponent` demonstrating name variations including a name with all spaces: ``` import { Component } from '@angular/core'; @Component({ selector: 'app-name-parent', template: ` <h2>Master controls {{names.length}} names</h2> <app-name-child *ngFor="let name of names" [name]="name"></app-name-child> ` }) export class NameParentComponent { // Displays 'Dr. IQ', '<no name set>', 'Bombasto' names = ['Dr. IQ', ' ', ' Bombasto ']; } ``` ### Test it for Intercept input property changes with a setter E2E tests of input property setter with empty and non-empty names: ``` // ... it('should display trimmed, non-empty names', async () => { const nonEmptyNameIndex = 0; const nonEmptyName = '"Dr. IQ"'; const parent = element(by.tagName('app-name-parent')); const hero = parent.all(by.tagName('app-name-child')).get(nonEmptyNameIndex); const displayName = await hero.element(by.tagName('h3')).getText(); expect(displayName).toEqual(nonEmptyName); }); it('should replace empty name with default name', async () => { const emptyNameIndex = 1; const defaultName = '"<no name set>"'; const parent = element(by.tagName('app-name-parent')); const hero = parent.all(by.tagName('app-name-child')).get(emptyNameIndex); const displayName = await hero.element(by.tagName('h3')).getText(); expect(displayName).toEqual(defaultName); }); // ... ``` [Back to top](component-interaction#top) Intercept input property changes with `ngOnChanges()` ----------------------------------------------------- Detect and act upon changes to input property values with the `ngOnChanges()` method of the `[OnChanges](../api/core/onchanges)` lifecycle hook interface. > You might prefer this approach to the property setter when watching multiple, interacting input properties. > > Learn about `ngOnChanges()` in the [Lifecycle Hooks](lifecycle-hooks) chapter. > > This `VersionChildComponent` detects changes to the `major` and `minor` input properties and composes a log message reporting these changes: ``` import { Component, Input, OnChanges, SimpleChanges } from '@angular/core'; @Component({ selector: 'app-version-child', template: ` <h3>Version {{major}}.{{minor}}</h3> <h4>Change log:</h4> <ul> <li *ngFor="let change of changeLog">{{change}}</li> </ul> ` }) export class VersionChildComponent implements OnChanges { @Input() major = 0; @Input() minor = 0; changeLog: string[] = []; ngOnChanges(changes: SimpleChanges) { const log: string[] = []; for (const propName in changes) { const changedProp = changes[propName]; const to = JSON.stringify(changedProp.currentValue); if (changedProp.isFirstChange()) { log.push(`Initial value of ${propName} set to ${to}`); } else { const from = JSON.stringify(changedProp.previousValue); log.push(`${propName} changed from ${from} to ${to}`); } } this.changeLog.push(log.join(', ')); } } ``` The `VersionParentComponent` supplies the `minor` and `major` values and binds buttons to methods that change them. 
``` import { Component } from '@angular/core'; @Component({ selector: 'app-version-parent', template: ` <h2>Source code version</h2> <button type="button" (click)="newMinor()">New minor version</button> <button type="button" (click)="newMajor()">New major version</button> <app-version-child [major]="major" [minor]="minor"></app-version-child> ` }) export class VersionParentComponent { major = 1; minor = 23; newMinor() { this.minor++; } newMajor() { this.major++; this.minor = 0; } } ``` Here's the output of a button-pushing sequence: ### Test it for Intercept input property changes with `ngOnChanges()` Test that ***both*** input properties are set initially and that button clicks trigger the expected `ngOnChanges` calls and values: ``` // ... // Test must all execute in this exact order it('should set expected initial values', async () => { const actual = await getActual(); const initialLabel = 'Version 1.23'; const initialLog = 'Initial value of major set to 1, Initial value of minor set to 23'; expect(actual.label).toBe(initialLabel); expect(actual.count).toBe(1); expect(await actual.logs.get(0).getText()).toBe(initialLog); }); it("should set expected values after clicking 'Minor' twice", async () => { const repoTag = element(by.tagName('app-version-parent')); const newMinorButton = repoTag.all(by.tagName('button')).get(0); await newMinorButton.click(); await newMinorButton.click(); const actual = await getActual(); const labelAfter2Minor = 'Version 1.25'; const logAfter2Minor = 'minor changed from 24 to 25'; expect(actual.label).toBe(labelAfter2Minor); expect(actual.count).toBe(3); expect(await actual.logs.get(2).getText()).toBe(logAfter2Minor); }); it("should set expected values after clicking 'Major' once", async () => { const repoTag = element(by.tagName('app-version-parent')); const newMajorButton = repoTag.all(by.tagName('button')).get(1); await newMajorButton.click(); const actual = await getActual(); const labelAfterMajor = 'Version 2.0'; const logAfterMajor = 'major changed from 1 to 2, minor changed from 23 to 0'; expect(actual.label).toBe(labelAfterMajor); expect(actual.count).toBe(2); expect(await actual.logs.get(1).getText()).toBe(logAfterMajor); }); async function getActual() { const versionTag = element(by.tagName('app-version-child')); const label = await versionTag.element(by.tagName('h3')).getText(); const ul = versionTag.element((by.tagName('ul'))); const logs = ul.all(by.tagName('li')); return { label, logs, count: await logs.count(), }; } // ... ``` [Back to top](component-interaction#top) Parent listens for child event ------------------------------ The child component exposes an `[EventEmitter](../api/core/eventemitter)` property with which it `emits` events when something happens. The parent binds to that event property and reacts to those events. 
The child's `[EventEmitter](../api/core/eventemitter)` property is an ***output property***, typically adorned with an [@Output() decorator](inputs-outputs#output) as seen in this `VoterComponent`: ``` import { Component, EventEmitter, Input, Output } from '@angular/core'; @Component({ selector: 'app-voter', template: ` <h4>{{name}}</h4> <button type="button" (click)="vote(true)" [disabled]="didVote">Agree</button> <button type="button" (click)="vote(false)" [disabled]="didVote">Disagree</button> ` }) export class VoterComponent { @Input() name = ''; @Output() voted = new EventEmitter<boolean>(); didVote = false; vote(agreed: boolean) { this.voted.emit(agreed); this.didVote = true; } } ``` Clicking a button triggers emission of a `true` or `false`, the boolean *payload*. The parent `VoteTakerComponent` binds an event handler called `onVoted()` that responds to the child event payload `$event` and updates a counter. ``` import { Component } from '@angular/core'; @Component({ selector: 'app-vote-taker', template: ` <h2>Should mankind colonize the Universe?</h2> <h3>Agree: {{agreed}}, Disagree: {{disagreed}}</h3> <app-voter *ngFor="let voter of voters" [name]="voter" (voted)="onVoted($event)"> </app-voter> ` }) export class VoteTakerComponent { agreed = 0; disagreed = 0; voters = ['Dr. IQ', 'Celeritas', 'Bombasto']; onVoted(agreed: boolean) { if (agreed) { this.agreed++; } else { this.disagreed++; } } } ``` The framework passes the event argument —represented by `$event`— to the handler method, and the method processes it: ### Test it for Parent listens for child event Test that clicking the *Agree* and *Disagree* buttons update the appropriate counters: ``` // ... it('should not emit the event initially', async () => { const voteLabel = element(by.tagName('app-vote-taker')).element(by.tagName('h3')); expect(await voteLabel.getText()).toBe('Agree: 0, Disagree: 0'); }); it('should process Agree vote', async () => { const voteLabel = element(by.tagName('app-vote-taker')).element(by.tagName('h3')); const agreeButton1 = element.all(by.tagName('app-voter')).get(0) .all(by.tagName('button')).get(0); await agreeButton1.click(); expect(await voteLabel.getText()).toBe('Agree: 1, Disagree: 0'); }); it('should process Disagree vote', async () => { const voteLabel = element(by.tagName('app-vote-taker')).element(by.tagName('h3')); const agreeButton1 = element.all(by.tagName('app-voter')).get(1) .all(by.tagName('button')).get(1); await agreeButton1.click(); expect(await voteLabel.getText()).toBe('Agree: 0, Disagree: 1'); }); // ... ``` [Back to top](component-interaction#top) Parent interacts with child using `local variable` -------------------------------------------------- A parent component cannot use data binding to read child properties or invoke child methods. Do both by creating a template reference variable for the child element and then reference that variable *within the parent template* as seen in the following example. The following is a child `CountdownTimerComponent` that repeatedly counts down to zero and launches a rocket. The `start` and `stop` methods control the clock and a countdown status message displays in its own template. 
``` import { Component, OnDestroy } from '@angular/core'; @Component({ selector: 'app-countdown-timer', template: '<p>{{message}}</p>' }) export class CountdownTimerComponent implements OnDestroy { intervalId = 0; message = ''; seconds = 11; ngOnDestroy() { this.clearTimer(); } start() { this.countDown(); } stop() { this.clearTimer(); this.message = `Holding at T-${this.seconds} seconds`; } private clearTimer() { clearInterval(this.intervalId); } private countDown() { this.clearTimer(); this.intervalId = window.setInterval(() => { this.seconds -= 1; if (this.seconds === 0) { this.message = 'Blast off!'; } else { if (this.seconds < 0) { this.seconds = 10; } // reset this.message = `T-${this.seconds} seconds and counting`; } }, 1000); } } ``` The `CountdownLocalVarParentComponent` that hosts the timer component is as follows: ``` import { Component } from '@angular/core'; import { CountdownTimerComponent } from './countdown-timer.component'; @Component({ selector: 'app-countdown-parent-lv', template: ` <h3>Countdown to Liftoff (via local variable)</h3> <button type="button" (click)="timer.start()">Start</button> <button type="button" (click)="timer.stop()">Stop</button> <div class="seconds">{{timer.seconds}}</div> <app-countdown-timer #timer></app-countdown-timer> `, styleUrls: ['../assets/demo.css'] }) export class CountdownLocalVarParentComponent { } ``` The parent component cannot data bind to the child's `start` and `stop` methods nor to its `seconds` property. Place a local variable, `#timer`, on the tag `<app-countdown-timer>` representing the child component. That gives you a reference to the child component and the ability to access *any of its properties or methods* from within the parent template. This example wires parent buttons to the child's `start` and `stop` and uses interpolation to display the child's `seconds` property. Here, the parent and child are working together. ### Test it for Parent interacts with child using `local variable` Test that the seconds displayed in the parent template match the seconds displayed in the child's status message. Test also that clicking the *Stop* button pauses the countdown timer: ``` // ... // The tests trigger periodic asynchronous operations (via `setInterval()`), which will prevent // the app from stabilizing. See https://angular.io/api/core/ApplicationRef#is-stable-examples // for more details. // To allow the tests to complete, we will disable automatically waiting for the Angular app to // stabilize. beforeEach(() => browser.waitForAngularEnabled(false)); afterEach(() => browser.waitForAngularEnabled(true)); it('timer and parent seconds should match', async () => { const parent = element(by.tagName(parentTag)); const startButton = parent.element(by.buttonText('Start')); const seconds = parent.element(by.className('seconds')); const timer = parent.element(by.tagName('app-countdown-timer')); await startButton.click(); // Wait for `<app-countdown-timer>` to be populated with any text. 
await browser.wait(() => timer.getText(), 2000); expect(await timer.getText()).toContain(await seconds.getText()); }); it('should stop the countdown', async () => { const parent = element(by.tagName(parentTag)); const startButton = parent.element(by.buttonText('Start')); const stopButton = parent.element(by.buttonText('Stop')); const timer = parent.element(by.tagName('app-countdown-timer')); await startButton.click(); expect(await timer.getText()).not.toContain('Holding'); await stopButton.click(); expect(await timer.getText()).toContain('Holding'); }); // ... ``` [Back to top](component-interaction#top) Parent calls an `@[ViewChild](../api/core/viewchild)()` ------------------------------------------------------- The *local variable* approach is straightforward. But it is limited because the parent-child wiring must be done entirely within the parent template. The parent component *itself* has no access to the child. You can't use the *local variable* technique if the parent component's *class* relies on the child component's *class*. The parent-child relationship of the components is not established within each component's respective *class* with the *local variable* technique. Because the *class* instances are not connected to one another, the parent *class* cannot access the child *class* properties and methods. When the parent component *class* requires that kind of access, ***inject*** the child component into the parent as a *ViewChild*. The following example illustrates this technique with the same [Countdown Timer](component-interaction#countdown-timer-example) example. Neither its appearance nor its behavior changes. The child [CountdownTimerComponent](component-interaction#countdown-timer-example) is the same as well. > The switch from the *local variable* to the *ViewChild* technique is solely for the purpose of demonstration. > > Here is the parent, `CountdownViewChildParentComponent`: ``` import { AfterViewInit, ViewChild } from '@angular/core'; import { Component } from '@angular/core'; import { CountdownTimerComponent } from './countdown-timer.component'; @Component({ selector: 'app-countdown-parent-vc', template: ` <h3>Countdown to Liftoff (via ViewChild)</h3> <button type="button" (click)="start()">Start</button> <button type="button" (click)="stop()">Stop</button> <div class="seconds">{{ seconds() }}</div> <app-countdown-timer></app-countdown-timer> `, styleUrls: ['../assets/demo.css'] }) export class CountdownViewChildParentComponent implements AfterViewInit { @ViewChild(CountdownTimerComponent) private timerComponent!: CountdownTimerComponent; seconds() { return 0; } ngAfterViewInit() { // Redefine `seconds()` to get from the `CountdownTimerComponent.seconds` ... // but wait a tick first to avoid one-time devMode // unidirectional-data-flow-violation error setTimeout(() => this.seconds = () => this.timerComponent.seconds, 0); } start() { this.timerComponent.start(); } stop() { this.timerComponent.stop(); } } ``` It takes a bit more work to get the child view into the parent component *class*. First, you have to import references to the `[ViewChild](../api/core/viewchild)` decorator and the `[AfterViewInit](../api/core/afterviewinit)` lifecycle hook. Next, inject the child `CountdownTimerComponent` into the private `timerComponent` property using the `@[ViewChild](../api/core/viewchild)` property decoration. The `#timer` local variable is gone from the component metadata. 
Instead, bind the buttons to the parent component's own `start` and `stop` methods and present the ticking seconds in an interpolation around the parent component's `seconds` method. These methods access the injected timer component directly. The `ngAfterViewInit()` lifecycle hook is an important wrinkle. The timer component isn't available until *after* Angular displays the parent view. So it displays `0` seconds initially. Then Angular calls the `ngAfterViewInit` lifecycle hook at which time it is *too late* to update the parent view's display of the countdown seconds. Angular's unidirectional data flow rule prevents updating the parent view's in the same cycle. The application must *wait one turn* before it can display the seconds. Use `setTimeout()` to wait one tick and then revise the `seconds()` method so that it takes future values from the timer component. ### Test it for Parent calls an `@[ViewChild](../api/core/viewchild)()` Use [the same countdown timer tests](component-interaction#countdown-tests) as before. [Back to top](component-interaction#top) Parent and children communicate using a service ----------------------------------------------- A parent component and its children share a service whose interface enables bidirectional communication *within the family*. The scope of the service instance is the parent component and its children. Components outside this component subtree have no access to the service or their communications. This `MissionService` connects the `MissionControlComponent` to multiple `AstronautComponent` children. ``` import { Injectable } from '@angular/core'; import { Subject } from 'rxjs'; @Injectable() export class MissionService { // Observable string sources private missionAnnouncedSource = new Subject<string>(); private missionConfirmedSource = new Subject<string>(); // Observable string streams missionAnnounced$ = this.missionAnnouncedSource.asObservable(); missionConfirmed$ = this.missionConfirmedSource.asObservable(); // Service message commands announceMission(mission: string) { this.missionAnnouncedSource.next(mission); } confirmMission(astronaut: string) { this.missionConfirmedSource.next(astronaut); } } ``` The `MissionControlComponent` both provides the instance of the service that it shares with its children (through the `providers` metadata array) and injects that instance into itself through its constructor: ``` import { Component } from '@angular/core'; import { MissionService } from './mission.service'; @Component({ selector: 'app-mission-control', template: ` <h2>Mission Control</h2> <button type="button" (click)="announce()">Announce mission</button> <app-astronaut *ngFor="let astronaut of astronauts" [astronaut]="astronaut"> </app-astronaut> <h3>History</h3> <ul> <li *ngFor="let event of history">{{event}}</li> </ul> `, providers: [MissionService] }) export class MissionControlComponent { astronauts = ['Lovell', 'Swigert', 'Haise']; history: string[] = []; missions = ['Fly to the moon!', 'Fly to mars!', 'Fly to Vegas!']; nextMission = 0; constructor(private missionService: MissionService) { missionService.missionConfirmed$.subscribe( astronaut => { this.history.push(`${astronaut} confirmed the mission`); }); } announce() { const mission = this.missions[this.nextMission++]; this.missionService.announceMission(mission); this.history.push(`Mission "${mission}" announced`); if (this.nextMission >= this.missions.length) { this.nextMission = 0; } } } ``` The `AstronautComponent` also injects the service in its constructor. 
Each `AstronautComponent` is a child of the `MissionControlComponent` and therefore receives its parent's service instance: ``` import { Component, Input, OnDestroy } from '@angular/core'; import { MissionService } from './mission.service'; import { Subscription } from 'rxjs'; @Component({ selector: 'app-astronaut', template: ` <p> {{astronaut}}: <strong>{{mission}}</strong> <button type="button" (click)="confirm()" [disabled]="!announced || confirmed"> Confirm </button> </p> ` }) export class AstronautComponent implements OnDestroy { @Input() astronaut = ''; mission = '<no mission announced>'; confirmed = false; announced = false; subscription: Subscription; constructor(private missionService: MissionService) { this.subscription = missionService.missionAnnounced$.subscribe( mission => { this.mission = mission; this.announced = true; this.confirmed = false; }); } confirm() { this.confirmed = true; this.missionService.confirmMission(this.astronaut); } ngOnDestroy() { // prevent memory leak when component destroyed this.subscription.unsubscribe(); } } ``` > Notice that this example captures the `subscription` and `unsubscribe()` when the `AstronautComponent` is destroyed. This is a memory-leak guard step. There is no actual risk in this application because the lifetime of a `AstronautComponent` is the same as the lifetime of the application itself. That *would not* always be true in a more complex application. > > You don't add this guard to the `MissionControlComponent` because, as the parent, it controls the lifetime of the `MissionService`. > > The *History* log demonstrates that messages travel in both directions between the parent `MissionControlComponent` and the `AstronautComponent` children, facilitated by the service: ### Test it for Parent and children communicate using a service Tests click buttons of both the parent `MissionControlComponent` and the `AstronautComponent` children and verify that the history meets expectations: ``` // ... it('should announce a mission', async () => { const missionControl = element(by.tagName('app-mission-control')); const announceButton = missionControl.all(by.tagName('button')).get(0); const history = missionControl.all(by.tagName('li')); await announceButton.click(); expect(await history.count()).toBe(1); expect(await history.get(0).getText()).toMatch(/Mission.* announced/); }); it('should confirm the mission by Lovell', async () => { await testConfirmMission(1, 'Lovell'); }); it('should confirm the mission by Haise', async () => { await testConfirmMission(3, 'Haise'); }); it('should confirm the mission by Swigert', async () => { await testConfirmMission(2, 'Swigert'); }); async function testConfirmMission(buttonIndex: number, astronaut: string) { const missionControl = element(by.tagName('app-mission-control')); const announceButton = missionControl.all(by.tagName('button')).get(0); const confirmButton = missionControl.all(by.tagName('button')).get(buttonIndex); const history = missionControl.all(by.tagName('li')); await announceButton.click(); await confirmButton.click(); expect(await history.count()).toBe(2); expect(await history.get(1).getText()).toBe(`${astronaut} confirmed the mission`); } // ... ``` [Back to top](component-interaction#top) Last reviewed on Mon Feb 28 2022
angular Upgrading from AngularJS to Angular Upgrading from AngularJS to Angular =================================== *Angular* is the name for the Angular of today and tomorrow. *AngularJS* is the name for all 1.x versions of Angular. AngularJS applications are great. Always consider the business case before moving to Angular. An important part of that case is the time and effort to get there. This guide describes the built-in tools for efficiently migrating AngularJS projects over to the Angular platform, a piece at a time. Some applications will be easier to upgrade than others, and there are many ways to make it easier for yourself. It is possible to prepare and align AngularJS applications with Angular even before beginning the upgrade process. These preparation steps are all about making the code more decoupled, more maintainable, and better aligned with modern development tools. That means in addition to making the upgrade easier, you will also improve the existing AngularJS applications. One of the keys to a successful upgrade is to do it incrementally, by running the two frameworks side by side in the same application, and porting AngularJS components to Angular one by one. This makes it possible to upgrade even large and complex applications without disrupting other business, because the work can be done collaboratively and spread over a period of time. The `upgrade` module in Angular has been designed to make incremental upgrading seamless. Preparation ----------- There are many ways to structure AngularJS applications. When you begin to upgrade these applications to Angular, some will turn out to be much easier to work with than others. There are a few key techniques and patterns that you can apply to future-proof applications even before you begin the migration. ### Follow the AngularJS Style Guide The [AngularJS Style Guide](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md "Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub") collects patterns and practices that have been proven to result in cleaner and more maintainable AngularJS applications. It contains a wealth of information about how to write and organize AngularJS code —and equally importantly— how **not** to write and organize AngularJS code. Angular is a reimagined version of the best parts of AngularJS. In that sense, its goals are the same as the Style Guide for AngularJS: To preserve the good parts of AngularJS, and to avoid the bad parts. There is a lot more to Angular than that of course, but this does mean that *following the style guide helps make your AngularJS application more closely aligned with Angular*. There are a few rules in particular that will make it much easier to do *an incremental upgrade* using the Angular `[upgrade/static](../api/upgrade/static)` module: | Rules | Details | | --- | --- | | [Rule of 1](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md#single-responsibility "Single Responsibility - Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub") | There should be one component per file. This not only makes components easy to navigate and find, but will also allow us to migrate them between languages and frameworks one at a time. In this example application, each controller, component, service, and filter is in its own source file. 
| | [Folders-by-Feature Structure](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md#folders-by-feature-structure "Folders-by-Feature Structure - Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub") [Modularity](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md#modularity "Modularity - Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub") | Define similar principles on a higher level of abstraction: Different parts of the application should reside in different directories and NgModules. | When an application is laid out feature per feature in this way, it can also be migrated one feature at a time. For applications that don't already look like this, applying the rules in the AngularJS style guide is a highly recommended preparation step. And this is not just for the sake of the upgrade - it is just solid advice in general! ### Using a Module Loader When you break application code down into one component per file, you often end up with a project structure with a large number of relatively small files. This is a much neater way to organize things than a small number of large files, but it doesn't work that well if you have to load all those files to the HTML page with `<script>` tags. Especially when you also have to maintain those tags in the correct order. That is why it is a good idea to start using a *module loader*. Using a module loader such as [SystemJS](https://github.com/systemjs/systemjs "systemjs/systemjs | GitHub"), [Webpack](https://webpack.github.io "webpack module bundler | GitHub"), or [Browserify](http://browserify.org "Browserify") allows us to use the built-in module systems of TypeScript or ES2015. You can use the `import` and `export` features that explicitly specify what code can and will be shared between different parts of the application. For ES5 applications you can use CommonJS style `require` and `module.exports` features. In both cases, the module loader will then take care of loading all the code the application needs in the correct order. When moving applications into production, module loaders also make it easier to package them all up into production bundles with batteries included. ### Migrating to TypeScript If part of the Angular upgrade plan is to also take TypeScript into use, it makes sense to bring in the TypeScript compiler even before the upgrade itself begins. This means there is one less thing to learn and think about during the actual upgrade. It also means you can start using TypeScript features in your AngularJS code. Since TypeScript is a superset of ECMAScript 2015, which in turn is a superset of ECMAScript 5, "switching" to TypeScript doesn't necessarily require anything more than installing the TypeScript compiler and renaming files from `*.js` to `*.ts`. But just doing that is not hugely useful or exciting, of course. Additional steps like the following can give us much more bang for the buck: * For applications that use a module loader, TypeScript imports and exports (which are really ECMAScript 2015 imports and exports) can be used to organize code into modules. * Type annotations can be gradually added to existing functions and variables to pin down their types and get benefits like build-time error checking, great autocompletion support and inline documentation. * JavaScript features new to ES2015, like arrow functions, `let`s and `const`s, default function parameters, and destructuring assignments can also be gradually added to make the code more expressive. 
* Services and controllers can be turned into *classes*. That way they'll be a step closer to becoming Angular service and component classes, which will make life easier after the upgrade. ### Using Component Directives In Angular, components are the main primitive from which user interfaces are built. You define the different portions of the UI as components and compose them into a full user experience. You can also do this in AngularJS, using *component directives*. These are directives that define their own templates, controllers, and input/output bindings - the same things that Angular components define. Applications built with component directives are much easier to migrate to Angular than applications built with lower-level features like `ng-controller`, `ng-include`, and scope inheritance. To be Angular compatible, an AngularJS component directive should configure these attributes: | Attributes | Details | | --- | --- | | `restrict: 'E'` | Components are usually used as elements. | | `scope: {}` | An isolate scope. In Angular, components are always isolated from their surroundings, and you should do this in AngularJS too. | | `bindToController: {}` | Component inputs and outputs should be bound to the controller instead of using the `$scope`. | | `controller` `controllerAs` | Components have their own controllers. | | `template` `templateUrl` | Components have their own templates. | Component directives may also use the following attributes: | Attributes | Details | | --- | --- | | `transclude: true/{}` | If the component needs to transclude content from elsewhere. | | `require` | If the component needs to communicate with the controller of some parent component. | Component directives **should not** use the following attributes: | Attributes (avoid) | Details | | --- | --- | | `compile` | This will not be supported in Angular. | | `replace: true` | Angular never replaces a component element with the component template. This attribute is also deprecated in AngularJS. | | `priority` `terminal` | While AngularJS components may use these, they are not used in Angular and it is better not to write code that relies on them. | An AngularJS component directive that is fully aligned with the Angular architecture may look something like this: ``` export function heroDetailDirective() { return { restrict: 'E', scope: {}, bindToController: { hero: '=', deleted: '&' }, template: ` <h2>{{$ctrl.hero.name}} details!</h2> <div><label>id: </label>{{$ctrl.hero.id}}</div> <button type="button" ng-click="$ctrl.onDelete()">Delete</button> `, controller: function HeroDetailController() { this.onDelete = () => { this.deleted({hero: this.hero}); }; }, controllerAs: '$ctrl' }; } ``` AngularJS 1.5 introduces the [component API](https://docs.angularjs.org/api/ng/type/angular.Module#component "component(name, options); - angular.Module | API | AngularJS") that makes it easier to define component directives like these. It is a good idea to use this API for component directives for several reasons: * It requires less boilerplate code. * It enforces the use of component best practices like `controllerAs`. * It has good default values for directive attributes like `scope` and `restrict`. 
The component directive example from above looks like this when expressed using the component API: ``` export const heroDetail = { bindings: { hero: '<', deleted: '&' }, template: ` <h2>{{$ctrl.hero.name}} details!</h2> <div><label>id: </label>{{$ctrl.hero.id}}</div> <button type="button" ng-click="$ctrl.onDelete()">Delete</button> `, controller: function HeroDetailController() { this.onDelete = () => { this.deleted(this.hero); }; } }; ``` Controller lifecycle hook methods `$onInit()`, `$onDestroy()`, and `$onChanges()` are another convenient feature that AngularJS 1.5 introduces. They all have nearly exact [equivalents in Angular](lifecycle-hooks "Lifecycle hooks | Angular"), so organizing component lifecycle logic around them will ease the eventual Angular upgrade process. Upgrading with ngUpgrade ------------------------ The ngUpgrade library in Angular is a very useful tool for upgrading anything but the smallest of applications. With it you can mix and match AngularJS and Angular components in the same application and have them interoperate seamlessly. That means you don't have to do the upgrade work all at once, since there is a natural coexistence between the two frameworks during the transition period. > The [end of life of AngularJS](https://blog.angular.io/finding-a-path-forward-with-angularjs-7e186fdd4429 "Finding a Path Forward with AngularJS | Angular Blog") is December 31st, 2021. With this event, ngUpgrade is now in a feature complete state. We will continue publishing security and bug fixes for ngUpgrade at least until December 31st, 2023. > > ### How ngUpgrade Works One of the primary tools provided by ngUpgrade is called the `[UpgradeModule](../api/upgrade/static/upgrademodule)`. This is a module that contains utilities for bootstrapping and managing hybrid applications that support both Angular and AngularJS code. When you use ngUpgrade, what you're really doing is *running both AngularJS and Angular at the same time*. All Angular code is running in the Angular framework, and AngularJS code in the AngularJS framework. Both of these are the actual, fully featured versions of the frameworks. There is no emulation going on, so you can expect to have all the features and natural behavior of both frameworks. What happens on top of this is that components and services managed by one framework can interoperate with those from the other framework. This happens in three main areas: Dependency injection, the DOM, and change detection. #### Dependency Injection Dependency injection is front and center in both AngularJS and Angular, but there are some key differences between the two frameworks in how it actually works. | AngularJS | Angular | | --- | --- | | Dependency injection tokens are always strings | Tokens [can have different types](dependency-injection "Dependency injection in Angular | Angular"). They are often classes. They may also be strings. | | There is exactly one injector. Even in multi-module applications, everything is poured into one big namespace. | There is a [tree hierarchy of injectors](hierarchical-dependency-injection "Hierarchical injectors | Angular"), with a root injector and an additional injector for each component. | Even accounting for these differences you can still have dependency injection interoperability. `[upgrade/static](../api/upgrade/static)` resolves the differences and makes everything work seamlessly: * You can make AngularJS services available for injection to Angular code by *upgrading* them. 
The same singleton instance of each service is shared between the frameworks. In Angular these services will always be in the *root injector* and available to all components. * You can also make Angular services available for injection to AngularJS code by *downgrading* them. Only services from the Angular root injector can be downgraded. Again, the same singleton instances are shared between the frameworks. When you register a downgraded service, you must explicitly specify a *string token* that you want to use in AngularJS. #### Components and the DOM In the DOM of a hybrid ngUpgrade application are components and directives from both AngularJS and Angular. These components communicate with each other by using the input and output bindings of their respective frameworks, which ngUpgrade bridges together. They may also communicate through shared injected dependencies, as described above. The key thing to understand about a hybrid application is that every element in the DOM is owned by exactly one of the two frameworks. The other framework ignores it. If an element is owned by AngularJS, Angular treats it as if it didn't exist, and vice versa. So normally a hybrid application begins life as an AngularJS application, and it is AngularJS that processes the root template, for example, the index.html. Angular then steps into the picture when an Angular component is used somewhere in an AngularJS template. The template of that component will then be managed by Angular, and it may contain any number of Angular components and directives. Beyond that, you may interleave the two frameworks. You always cross the boundary between the two frameworks by one of two ways: 1. By using a component from the other framework: An AngularJS template using an Angular component, or an Angular template using an AngularJS component. 2. By transcluding or projecting content from the other framework. ngUpgrade bridges the related concepts of AngularJS transclusion and Angular content projection together. Whenever you use a component that belongs to the other framework, a switch between framework boundaries occurs. However, that switch only happens to the elements in the template of that component. Consider a situation where you use an Angular component from AngularJS like this: ``` <a-component></a-component> ``` The DOM element `<a-component>` will remain to be an AngularJS managed element, because it is defined in an AngularJS template. That also means you can apply additional AngularJS directives to it, but *not* Angular directives. It is only in the template of the `<a-component>` where Angular steps in. This same rule also applies when you use AngularJS component directives from Angular. #### Change Detection The `scope.$apply()` is how AngularJS detects changes and updates data bindings. After every event that occurs, `scope.$apply()` gets called. This is done either automatically by the framework, or manually by you. In Angular things are different. While change detection still occurs after every event, no one needs to call `scope.$apply()` for that to happen. This is because all Angular code runs inside something called the [Angular zone](../api/core/ngzone "NgZone | Core - API | Angular"). Angular always knows when the code finishes, so it also knows when it should kick off change detection. The code itself doesn't have to call `scope.$apply()` or anything like it. In the case of hybrid applications, the `[UpgradeModule](../api/upgrade/static/upgrademodule)` bridges the AngularJS and Angular approaches. 
Here is what happens: * Everything that happens in the application runs inside the Angular zone. This is true whether the event originated in AngularJS or Angular code. The zone triggers Angular change detection after every event. * The `[UpgradeModule](../api/upgrade/static/upgrademodule)` will invoke the AngularJS `$rootScope.$apply()` after every turn of the Angular zone. This also triggers AngularJS change detection after every event. In practice, you do not need to call `$apply()`, regardless of whether it is in AngularJS or Angular. The `[UpgradeModule](../api/upgrade/static/upgrademodule)` does it for us. You *can* still call `$apply()` so there is no need to remove such calls from existing code. Those calls just trigger additional AngularJS change detection checks in a hybrid application. When you downgrade an Angular component and then use it from AngularJS, the inputs of the component will be watched using AngularJS change detection. When those inputs change, the corresponding properties in the component are set. You can also hook into the changes by implementing the [OnChanges](../api/core/onchanges "OnChanges | Core - API | Angular") interface in the component, just like you could if it hadn't been downgraded. Correspondingly, when you upgrade an AngularJS component and use it from Angular, all the bindings defined for `scope` (or `bindToController`) of the component directive will be hooked into Angular change detection. They will be treated as regular Angular inputs. Their values will be written to the scope (or controller) of the upgraded component when they change. ### Using UpgradeModule with Angular `NgModules` Both AngularJS and Angular have their own concept of modules to help organize an application into cohesive blocks of functionality. Their details are quite different in architecture and implementation. In AngularJS, you add Angular assets to the `angular.module` property. In Angular, you create one or more classes adorned with an `[NgModule](../api/core/ngmodule)` decorator that describes Angular assets in metadata. The differences blossom from there. In a hybrid application you run both versions of Angular at the same time. That means that you need at least one module each from both AngularJS and Angular. You will import `[UpgradeModule](../api/upgrade/static/upgrademodule)` inside the NgModule, and then use it for bootstrapping the AngularJS module. > For more information, see [NgModules](ngmodules "NgModules | Angular"). > > ### Bootstrapping hybrid applications To bootstrap a hybrid application, you must bootstrap each of the Angular and AngularJS parts of the application. You must bootstrap the Angular bits first and then ask the `[UpgradeModule](../api/upgrade/static/upgrademodule)` to bootstrap the AngularJS bits next. In an AngularJS application you have a root AngularJS module, which will also be used to bootstrap the AngularJS application. ``` angular.module('heroApp', []) .controller('MainCtrl', function() { this.message = 'Hello world'; }); ``` Pure AngularJS applications can be automatically bootstrapped by using an `ng-app` directive somewhere on the HTML page. But for hybrid applications, you manually bootstrap using the `[UpgradeModule](../api/upgrade/static/upgrademodule)`. Therefore, it is a good preliminary step to switch AngularJS applications to use the manual JavaScript [`angular.bootstrap`](https://docs.angularjs.org/api/ng/function/angular.bootstrap "angular.bootstrap | API | AngularJS") method even before switching them to hybrid mode. 
Say you have an `ng-app` driven bootstrap such as this one: ``` <!DOCTYPE HTML> <html lang="en"> <head> <base href="/"> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.3/angular.js"></script> <script src="app/ajs-ng-app/app.module.js"></script> </head> <body ng-app="heroApp" ng-strict-di> <div id="message" ng-controller="MainCtrl as mainCtrl"> {{ mainCtrl.message }} </div> </body> </html> ``` You can remove the `ng-app` and `ng-strict-di` directives from the HTML and instead switch to calling `angular.bootstrap` from JavaScript, which will result in the same thing: ``` angular.bootstrap(document.body, ['heroApp'], { strictDi: true }); ``` To begin converting your AngularJS application to a hybrid, you need to load the Angular framework. You can see how this can be done with SystemJS by following the instructions in [Setup for Upgrading to AngularJS](upgrade-setup "Setup for upgrading from AngularJS | Angular") for selectively copying code from the [QuickStart GitHub repository](https://github.com/angular/quickstart "angular/quickstart | GitHub"). You also need to install the `@angular/upgrade` package using `npm install @angular/upgrade --save` and add a mapping for the `@angular/upgrade/[static](../api/upgrade/static)` package: ``` '@angular/upgrade/static': 'npm:@angular/upgrade/fesm2015/static.mjs', ``` Next, create an `app.module.ts` file and add the following `[NgModule](../api/core/ngmodule)` class: ``` import { DoBootstrap, NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { UpgradeModule } from '@angular/upgrade/static'; @NgModule({ imports: [ BrowserModule, UpgradeModule ] }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true }); } } ``` This bare minimum `[NgModule](../api/core/ngmodule)` imports `[BrowserModule](../api/platform-browser/browsermodule)`, the module every Angular browser-based application must have. It also imports `[UpgradeModule](../api/upgrade/static/upgrademodule)` from `@angular/upgrade/[static](../api/upgrade/static)`, which exports providers that will be used for upgrading and downgrading services and components. In the constructor of the `AppModule`, use dependency injection to get a hold of the `[UpgradeModule](../api/upgrade/static/upgrademodule)` instance, and use it to bootstrap the AngularJS application in the `AppModule.ngDoBootstrap` method. The `upgrade.bootstrap` method takes the exact same arguments as [angular.bootstrap](https://docs.angularjs.org/api/ng/function/angular.bootstrap "angular.bootstrap | API | AngularJS"): > **NOTE**: You do not add a `bootstrap` declaration to the `@[NgModule](../api/core/ngmodule)` decorator, since AngularJS will own the root template of the application. > > Now you can bootstrap `AppModule` using the `platformBrowserDynamic.bootstrapModule` method. ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; platformBrowserDynamic().bootstrapModule(AppModule); ``` Congratulations. You're running a hybrid application. The existing AngularJS code works as before *and* you're ready to start adding Angular code. ### Using Angular Components from AngularJS Code Once you're running a hybrid app, you can start the gradual process of upgrading code. One of the more common patterns for doing that is to use an Angular component in an AngularJS context. 
This could be a completely new component or one that was previously AngularJS but has been rewritten for Angular. Say you have an Angular component that shows information about a hero: ``` import { Component } from '@angular/core'; @Component({ selector: 'hero-detail', template: ` <h2>Windstorm details!</h2> <div>id: 1</div> ` }) export class HeroDetailComponent { } ``` If you want to use this component from AngularJS, you need to *downgrade* it using the `[downgradeComponent](../api/upgrade/static/downgradecomponent)()` method. The result is an AngularJS *directive*, which you can then register in the AngularJS module: ``` import { HeroDetailComponent } from './hero-detail.component'; /* . . . */ import { downgradeComponent } from '@angular/upgrade/static'; angular.module('heroApp', []) .directive( 'heroDetail', downgradeComponent({ component: HeroDetailComponent }) as angular.IDirectiveFactory ); ``` > By default, Angular change detection will also run on the component for every AngularJS `$digest` cycle. If you want change detection to run only when the inputs change, you can set `propagateDigest` to `false` when calling `[downgradeComponent](../api/upgrade/static/downgradecomponent)()`. > > Because `HeroDetailComponent` is an Angular component, you must also add it to the `declarations` in the `AppModule`. ``` import { HeroDetailComponent } from './hero-detail.component'; @NgModule({ imports: [ BrowserModule, UpgradeModule ], declarations: [ HeroDetailComponent ] }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true }); } } ``` > All Angular components, directives, and pipes must be declared in an NgModule. > > The net result is an AngularJS directive called `heroDetail` that you can use like any other directive in AngularJS templates. ``` <hero-detail></hero-detail> ``` > **NOTE**: This is an AngularJS element directive (`restrict: 'E'`) called `heroDetail`. An AngularJS element directive is matched based on its *name*. *The `selector` metadata of the downgraded Angular component is ignored*. > > Most components are not quite this simple, of course. Many of them have *inputs and outputs* that connect them to the outside world. An Angular hero detail component with inputs and outputs might look like this: ``` import { Component, EventEmitter, Input, Output } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'hero-detail', template: ` <h2>{{hero.name}} details!</h2> <div>id: {{hero.id}}</div> <button type="button" (click)="onDelete()">Delete</button> ` }) export class HeroDetailComponent { @Input() hero!: Hero; @Output() deleted = new EventEmitter<Hero>(); onDelete() { this.deleted.emit(this.hero); } } ``` These inputs and outputs can be supplied from the AngularJS template, and the `[downgradeComponent](../api/upgrade/static/downgradecomponent)()` method takes care of wiring them up: ``` <div ng-controller="MainController as mainCtrl"> <hero-detail [hero]="mainCtrl.hero" (deleted)="mainCtrl.onDelete($event)"> </hero-detail> </div> ``` Even though you are in an AngularJS template, **you are using Angular attribute syntax to bind the inputs and outputs**. This is a requirement for downgraded components. The expressions themselves are still regular AngularJS expressions. There is one notable exception to the rule of using Angular attribute syntax for downgraded components.
It has to do with input or output names that consist of multiple words. In Angular, you would bind these attributes using camelCase: ``` [myHero]="hero" (heroDeleted)="handleHeroDeleted($event)" ``` But when using them from AngularJS templates, you must use kebab-case: ``` [my-hero]="hero" (hero-deleted)="handleHeroDeleted($event)" ``` The `$event` variable can be used in outputs to gain access to the object that was emitted. In this case it will be the `Hero` object, because that is what was passed to `this.deleted.emit()`. Since this is an AngularJS template, you can still use other AngularJS directives on the element, even though it has Angular binding attributes on it. For example, you can easily make multiple copies of the component using `ng-repeat`: ``` <div ng-controller="MainController as mainCtrl"> <hero-detail [hero]="hero" (deleted)="mainCtrl.onDelete($event)" ng-repeat="hero in mainCtrl.heroes"> </hero-detail> </div> ``` ### Using AngularJS Component Directives from Angular Code So, you can write an Angular component and then use it from AngularJS code. This is useful when you start to migrate from lower-level components and work your way up. But in some cases it is more convenient to do things in the opposite order: To start with higher-level components and work your way down. This too can be done using the `[upgrade/static](../api/upgrade/static)`. You can *upgrade* AngularJS component directives and then use them from Angular. Not all kinds of AngularJS directives can be upgraded. The directive really has to be a *component directive*, with the characteristics [described in the preparation guide above](upgrade#using-component-directives "Using Component Directives - Upgrading from AngularJS to Angular | Angular"). The safest bet for ensuring compatibility is using the [component API](https://docs.angularjs.org/api/ng/type/angular.Module "angular.Module | API | AngularJS") introduced in AngularJS 1.5. An example of an upgradeable component is one that just has a template and a controller: ``` export const heroDetail = { template: ` <h2>Windstorm details!</h2> <div><label>id: </label>1</div> `, controller: function HeroDetailController() { } }; ``` You can *upgrade* this component to Angular using the `[UpgradeComponent](../api/upgrade/static/upgradecomponent)` class. By creating a new Angular **directive** that extends `[UpgradeComponent](../api/upgrade/static/upgradecomponent)` and doing a `super` call inside its constructor, you have a fully upgraded AngularJS component to be used inside Angular. All that is left is to add it to the `declarations` array of `AppModule`. ``` import { Directive, ElementRef, Injector, SimpleChanges } from '@angular/core'; import { UpgradeComponent } from '@angular/upgrade/static'; @Directive({ selector: 'hero-detail' }) export class HeroDetailDirective extends UpgradeComponent { constructor(elementRef: ElementRef, injector: Injector) { super('heroDetail', elementRef, injector); } } ``` ``` @NgModule({ imports: [ BrowserModule, UpgradeModule ], declarations: [ HeroDetailDirective, /* . . . */ ] }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true }); } } ``` > Upgraded components are Angular **directives**, instead of **components**, because Angular is unaware that AngularJS will create elements under it. 
As far as Angular knows, the upgraded component is just a directive —a tag— and Angular doesn't have to concern itself with its children. > > An upgraded component may also have inputs and outputs, as defined by the scope/controller bindings of the original AngularJS component directive. When you use the component from an Angular template, provide the inputs and outputs using **Angular template syntax**, observing the following rules: | Bindings | Binding definition | Template syntax | | --- | --- | --- | | Attribute binding | `myAttribute: '@myAttribute'` | `<my-component myAttribute="value">` | | Expression binding | `myOutput: '&myOutput'` | `<my-component (myOutput)="action()">` | | One-way binding | `myValue: '<myValue'` | `<my-component [myValue]="anExpression">` | | Two-way binding | `myValue: '=myValue'` | As a two-way binding: `<my-component [(myValue)]="anExpression">` Since most AngularJS two-way bindings actually only need a one-way binding in practice, `<my-component [myValue]="anExpression">` is often enough. | For example, imagine a hero detail AngularJS component directive with one input and one output: ``` export const heroDetail = { bindings: { hero: '<', deleted: '&' }, template: ` <h2>{{$ctrl.hero.name}} details!</h2> <div><label>id: </label>{{$ctrl.hero.id}}</div> <button type="button" ng-click="$ctrl.onDelete()">Delete</button> `, controller: function HeroDetailController() { this.onDelete = () => { this.deleted(this.hero); }; } }; ``` You can upgrade this component to Angular, annotate inputs and outputs in the upgrade directive, and then provide the input and output using Angular template syntax: ``` import { Directive, ElementRef, Injector, Input, Output, EventEmitter } from '@angular/core'; import { UpgradeComponent } from '@angular/upgrade/static'; import { Hero } from '../hero'; @Directive({ selector: 'hero-detail' }) export class HeroDetailDirective extends UpgradeComponent { @Input() hero: Hero; @Output() deleted: EventEmitter<Hero>; constructor(elementRef: ElementRef, injector: Injector) { super('heroDetail', elementRef, injector); } } ``` ``` import { Component } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'my-container', template: ` <h1>Tour of Heroes</h1> <hero-detail [hero]="hero" (deleted)="heroDeleted($event)"> </hero-detail> ` }) export class ContainerComponent { hero = new Hero(1, 'Windstorm'); heroDeleted(hero: Hero) { hero.name = 'Ex-' + hero.name; } } ``` ### Projecting AngularJS Content into Angular Components When you are using a downgraded Angular component from an AngularJS template, the need may arise to *transclude* some content into it. This is also possible. While there is no such thing as transclusion in Angular, there is a very similar concept called *content projection*. `[upgrade/static](../api/upgrade/static)` is able to make these two features interoperate. Angular components that support content projection make use of an `[<ng-content>](../api/core/ng-content)` tag within them. Here is an example of such a component: ``` import { Component, Input } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'hero-detail', template: ` <h2>{{hero.name}}</h2> <div> <ng-content></ng-content> </div> ` }) export class HeroDetailComponent { @Input() hero!: Hero; } ``` When using the component from AngularJS, you can supply contents for it. 
Just like they would be transcluded in AngularJS, they get projected to the location of the `[<ng-content>](../api/core/ng-content)` tag in Angular: ``` <div ng-controller="MainController as mainCtrl"> <hero-detail [hero]="mainCtrl.hero"> <!-- Everything here will get projected --> <p>{{mainCtrl.hero.description}}</p> </hero-detail> </div> ``` > When AngularJS content gets projected inside an Angular component, it still remains in "AngularJS land" and is managed by the AngularJS framework. > > ### Transcluding Angular Content into AngularJS Component Directives Just as you can project AngularJS content into Angular components, you can *transclude* Angular content into AngularJS components, whenever you are using upgraded versions from them. When an AngularJS component directive supports transclusion, it may use the `ng-transclude` directive in its template to mark the transclusion point: ``` export const heroDetail = { bindings: { hero: '=' }, template: ` <h2>{{$ctrl.hero.name}}</h2> <div> <ng-transclude></ng-transclude> </div> `, transclude: true }; ``` If you upgrade this component and use it from Angular, you can populate the component tag with contents that will then get transcluded: ``` import { Component } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'my-container', template: ` <hero-detail [hero]="hero"> <!-- Everything here will get transcluded --> <p>{{hero.description}}</p> </hero-detail> ` }) export class ContainerComponent { hero = new Hero(1, 'Windstorm', 'Specific powers of controlling winds'); } ``` ### Making AngularJS Dependencies Injectable to Angular When running a hybrid app, you may encounter situations where you need to inject some AngularJS dependencies into your Angular code. Maybe you have some business logic still in AngularJS services. Maybe you want access to built-in services of AngularJS like `$location` or `$timeout`. In these situations, it is possible to *upgrade* an AngularJS provider to Angular. This makes it possible to then inject it somewhere in Angular code. For example, you might have a service called `HeroesService` in AngularJS: ``` import { Hero } from '../hero'; export class HeroesService { get() { return [ new Hero(1, 'Windstorm'), new Hero(2, 'Spiderman') ]; } } ``` You can upgrade the service using an Angular [factory provider][AioGuideDependencyInjectionProvidersFactoryProviders] that requests the service from the AngularJS `$injector`. Many developers prefer to declare the factory provider in a separate `ajs-upgraded-providers.ts` file so that they are all together, making it easier to reference them, create new ones and delete them once the upgrade is over. It is also recommended to export the `heroesServiceFactory` function so that Ahead-of-Time compilation can pick it up. > **NOTE**: The 'heroes' string inside the factory refers to the AngularJS `HeroesService`. It is common in AngularJS applications to choose a service name for the token, for example "heroes", and append the "Service" suffix to create the class name. 
> > ``` import { HeroesService } from './heroes.service'; export function heroesServiceFactory(i: any) { return i.get('heroes'); } export const heroesServiceProvider = { provide: HeroesService, useFactory: heroesServiceFactory, deps: ['$injector'] }; ``` You can then provide the service to Angular by adding it to the `@[NgModule](../api/core/ngmodule)`: ``` import { heroesServiceProvider } from './ajs-upgraded-providers'; @NgModule({ imports: [ BrowserModule, UpgradeModule ], providers: [ heroesServiceProvider ], /* . . . */ }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true }); } } ``` Then use the service inside your component by injecting it in the component constructor using its class as a type annotation: ``` import { Component } from '@angular/core'; import { HeroesService } from './heroes.service'; import { Hero } from '../hero'; @Component({ selector: 'hero-detail', template: ` <h2>{{hero.id}}: {{hero.name}}</h2> ` }) export class HeroDetailComponent { hero: Hero; constructor(heroes: HeroesService) { this.hero = heroes.get()[0]; } } ``` > In this example you upgraded a service class. You can use a TypeScript type annotation when you inject it. While it doesn't affect how the dependency is handled, it enables the benefits of static type checking. This is not required though, and any AngularJS service, factory, or provider can be upgraded. > > ### Making Angular Dependencies Injectable to AngularJS In addition to upgrading AngularJS dependencies, you can also *downgrade* Angular dependencies, so that you can use them from AngularJS. This can be useful when you start migrating services to Angular or creating new services in Angular while retaining components written in AngularJS. For example, you might have an Angular service called `Heroes`: ``` import { Injectable } from '@angular/core'; import { Hero } from '../hero'; @Injectable() export class Heroes { get() { return [ new Hero(1, 'Windstorm'), new Hero(2, 'Spiderman') ]; } } ``` Again, as with Angular components, register the provider with the `[NgModule](../api/core/ngmodule)` by adding it to the `providers` list of the module. ``` import { Heroes } from './heroes'; @NgModule({ imports: [ BrowserModule, UpgradeModule ], providers: [ Heroes ] }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true }); } } ``` Now wrap the Angular `Heroes` in an *AngularJS factory function* using `[downgradeInjectable](../api/upgrade/static/downgradeinjectable)()` and plug the factory into an AngularJS module. The name of the AngularJS dependency is up to you: ``` import { Heroes } from './heroes'; /* . . . */ import { downgradeInjectable } from '@angular/upgrade/static'; angular.module('heroApp', []) .factory('heroes', downgradeInjectable(Heroes)) .component('heroDetail', heroDetailComponent); ``` After this, the service is injectable anywhere in AngularJS code: ``` export const heroDetailComponent = { template: ` <h2>{{$ctrl.hero.id}}: {{$ctrl.hero.name}}</h2> `, controller: ['heroes', function(heroes: Heroes) { this.hero = heroes.get()[0]; }] }; ``` Lazy Loading AngularJS ---------------------- When building applications, you want to ensure that only the required resources are loaded when necessary. 
Whether that be loading of assets or code, making sure everything that can be deferred until needed keeps your application running efficiently. This is especially true when running different frameworks in the same application. [Lazy loading](glossary#lazy-loading "lazy loading - Glossary | Angular") is a technique that defers the loading of required assets and code resources until they are actually used. This reduces startup time and increases efficiency, especially when running different frameworks in the same application. When migrating large applications from AngularJS to Angular using a hybrid approach, you want to migrate some of the most commonly used features first, and only use the less commonly used features if needed. Doing so helps you ensure that the application is still providing a seamless experience for your users while you are migrating. In most environments where both Angular and AngularJS are used to render the application, both frameworks are loaded in the initial bundle being sent to the client. This results in both increased bundle size and possible reduced performance. Overall application performance is affected in cases where the user stays on Angular-rendered pages because the AngularJS framework and application are still loaded and running, even if they are never accessed. You can take steps to mitigate both bundle size and performance issues. By isolating your AngularJS application to a separate bundle, you can take advantage of [lazy loading](glossary#lazy-loading "lazy loading - Glossary | Angular") to load, bootstrap, and render the AngularJS application only when needed. This strategy reduces your initial bundle size, defers any potential impact from loading both frameworks until absolutely necessary, and keeps your application running as efficiently as possible. The steps below show you how to do the following: * Set up a callback function for your AngularJS bundle. * Create a service that lazy loads and bootstraps your AngularJS app. * Create a routable component for AngularJS content * Create a custom `matcher` function for AngularJS-specific URLs and configure the Angular `[Router](../api/router/router)` with the custom matcher for AngularJS routes. ### Create a service to lazy load AngularJS As of Angular version 8, lazy loading code can be accomplished by using the dynamic import syntax `import('...')`. In your application, you create a new service that uses dynamic imports to lazy load AngularJS. ``` import { Injectable } from '@angular/core'; import * as angular from 'angular'; @Injectable({ providedIn: 'root' }) export class LazyLoaderService { private app: angular.auto.IInjectorService | undefined; load(el: HTMLElement): void { import('./angularjs-app').then(app => { try { this.app = app.bootstrap(el); } catch (e) { console.error(e); } }); } destroy() { if (this.app) { this.app.get('$rootScope').$destroy(); } } } ``` The service uses the `import()` method to load your bundled AngularJS application lazily. This decreases the initial bundle size of your application as you're not loading code your user doesn't need yet. You also need to provide a way to *bootstrap* the application manually after it has been loaded. AngularJS provides a way to manually bootstrap an application using the [angular.bootstrap()](https://docs.angularjs.org/api/ng/function/angular.bootstrap "angular.bootstrap | API | AngularJS") method with a provided HTML element. Your AngularJS application should also expose a `bootstrap` method that bootstraps the AngularJS app. 
To ensure any necessary teardown is triggered in the AngularJS app, such as removal of global listeners, you also implement a `destroy()` method that calls `$rootScope.$destroy()`. ``` import * as angular from 'angular'; import 'angular-route'; const appModule = angular.module('myApp', [ 'ngRoute' ]) .config(['$routeProvider', '$locationProvider', function config($routeProvider: angular.route.IRouteProvider, $locationProvider: angular.ILocationProvider) { $locationProvider.html5Mode(true); $routeProvider. when('/users', { template: ` <p> Users Page </p> ` }). otherwise({ template: '' }); }] ); export function bootstrap(el: HTMLElement) { return angular.bootstrap(el, [appModule.name]); } ``` Your AngularJS application is configured with only the routes it needs to render content. The remaining routes in your application are handled by the Angular Router. The exposed `bootstrap` method is called in your Angular application to bootstrap the AngularJS application after the bundle is loaded. > **NOTE**: After AngularJS is loaded and bootstrapped, listeners such as those wired up in your route configuration will continue to listen for route changes. To ensure listeners are shut down when AngularJS isn't being displayed, configure an `otherwise` option with the [$routeProvider](https://docs.angularjs.org/api/ngRoute/provider/%24routeProvider "$routeProvider | API | AngularJS") that renders an empty template. This assumes all other routes will be handled by Angular. > > ### Create a component to render AngularJS content In your Angular application, you need a component as a placeholder for your AngularJS content. This component uses the service you create to load and bootstrap your AngularJS application after the component is initialized. ``` import { Component, OnInit, OnDestroy, ElementRef } from '@angular/core'; import { LazyLoaderService } from '../lazy-loader.service'; @Component({ selector: 'app-angular-js', template: '<div ng-view></div>' }) export class AngularJSComponent implements OnInit, OnDestroy { constructor( private lazyLoader: LazyLoaderService, private elRef: ElementRef ) {} ngOnInit() { this.lazyLoader.load(this.elRef.nativeElement); } ngOnDestroy() { this.lazyLoader.destroy(); } } ``` When the Angular Router matches a route that uses AngularJS, the `AngularJSComponent` is rendered, and the content is rendered within the AngularJS [`ng-view`](https://docs.angularjs.org/api/ngRoute/directive/ngView "ngView | API | AngularJS") directive. When the user navigates away from the route, the `$rootScope` of the AngularJS application is destroyed. ### Configure a custom route matcher for AngularJS routes To configure the Angular Router, you must define a route for AngularJS URLs. To match those URLs, you add a route configuration that uses the `matcher` property. The `matcher` allows you to use custom pattern matching for URL paths. The Angular Router tries to match on more specific routes such as static and variable routes first. When it doesn't find a match, it then looks at custom matchers defined in your route configuration. If the custom matchers don't match a route, it then goes to catch-all routes, such as a 404 page. The following example defines a custom matcher function for AngularJS routes. ``` export function isAngularJSUrl(url: UrlSegment[]) { return url.length > 0 && url[0].path.startsWith('users') ?
({consumed: url}) : null; } ``` The following code adds a route object to your routing configuration using the `matcher` property and custom matcher, and the `component` property with `AngularJSComponent`. ``` import { NgModule } from '@angular/core'; import { Routes, RouterModule, UrlSegment } from '@angular/router'; import { AngularJSComponent } from './angular-js/angular-js.component'; import { HomeComponent } from './home/home.component'; import { App404Component } from './app404/app404.component'; // Match any URL that starts with `users` export function isAngularJSUrl(url: UrlSegment[]) { return url.length > 0 && url[0].path.startsWith('users') ? ({consumed: url}) : null; } export const routes: Routes = [ // Routes rendered by Angular { path: '', component: HomeComponent }, // AngularJS routes { matcher: isAngularJSUrl, component: AngularJSComponent }, // Catch-all route { path: '**', component: App404Component } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { } ``` When your application matches a route that needs AngularJS, the AngularJS application is loaded and bootstrapped, the AngularJS routes match the necessary URL to render their content, and your application continues to run with both AngularJS and Angular frameworks. Using the Unified Angular Location Service ------------------------------------------ In AngularJS, the [$location service](https://docs.angularjs.org/api/ng/service/%24location "$location | API | AngularJS") handles all routing configuration and navigation, encoding and decoding of URLS, redirects, and interactions with browser APIs. Angular uses its own underlying `[Location](../api/common/location)` service for all of these tasks. When you migrate from AngularJS to Angular you will want to move as much responsibility as possible to Angular, so that you can take advantage of new APIs. To help with the transition, Angular provides the `[LocationUpgradeModule](../api/common/upgrade/locationupgrademodule)`. This module enables a *unified* location service that shifts responsibilities from the AngularJS `$location` service to the Angular `[Location](../api/common/location)` service. To use the `[LocationUpgradeModule](../api/common/upgrade/locationupgrademodule)`, import the symbol from `@angular/common/upgrade` and add it to your `AppModule` imports using the static `[LocationUpgradeModule.config()](../api/common/upgrade/locationupgrademodule#config)` method. ``` // Other imports … import { LocationUpgradeModule } from '@angular/common/upgrade'; @NgModule({ imports: [ // Other NgModule imports… LocationUpgradeModule.config() ] }) export class AppModule {} ``` The `[LocationUpgradeModule.config()](../api/common/upgrade/locationupgrademodule#config)` method accepts a configuration object that allows you to configure options including the `[LocationStrategy](../api/common/locationstrategy)` with the `useHash` property, and the URL prefix with the `hashPrefix` property. The `useHash` property defaults to `false`, and the `hashPrefix` defaults to an empty `string`. Pass the configuration object to override the defaults. ``` LocationUpgradeModule.config({ useHash: true, hashPrefix: '!' }) ``` > **NOTE**: See the `[LocationUpgradeConfig](../api/common/upgrade/locationupgradeconfig)` for more configuration options available to the `[LocationUpgradeModule.config()](../api/common/upgrade/locationupgrademodule#config)` method. 
> > This registers a drop-in replacement for the `$location` provider in AngularJS. Once registered, all navigation, routing broadcast messages, and any necessary digest cycles in AngularJS triggered during navigation are handled by Angular. This gives you a single way to navigate within both sides of your hybrid application consistently. For usage of the `$location` service as a provider in AngularJS, you need to downgrade the `[$locationShim](../api/common/upgrade/%24locationshim)` using a factory provider. ``` // Other imports … import { $locationShim } from '@angular/common/upgrade'; import { downgradeInjectable } from '@angular/upgrade/static'; angular.module('myHybridApp', […]) .factory('$location', downgradeInjectable($locationShim)); ``` Once you introduce the Angular Router, using the Angular Router triggers navigations through the unified location service, still providing a single source for navigating with AngularJS and Angular. PhoneCat Upgrade Tutorial ------------------------- In this section, you'll learn to prepare and upgrade an application with `ngUpgrade`. The example application is [Angular PhoneCat](https://github.com/angular/angular-phonecat "angular/angular-phonecat | GitHub") from [the original AngularJS tutorial](https://docs.angularjs.org/tutorial "PhoneCat Tutorial App | Tutorial | AngularJS"), which is where many of us began our Angular adventures. Now you'll see how to bring that application to the brave new world of Angular. During the process you'll learn how to apply the steps outlined in the [preparation guide](upgrade#preparation "Preparation - Upgrading from AngularJS to Angular | Angular"). You'll align the application with Angular and also start writing in TypeScript. This tutorial is based on the 1.5.x version of the `angular-phonecat` tutorial, which is preserved in the [1.5-snapshot](https://github.com/angular/angular-phonecat/commits/1.5-snapshot "angular/angular-phonecat v1.5 | GitHub") branch of the repository. To follow along, clone the [angular-phonecat](https://github.com/angular/angular-phonecat "angular/angular-phonecat | GitHub") repository, check out the `1.5-snapshot` branch and apply the steps as you go. In terms of project structure, this is where the work begins: ``` angular-phonecat bower.json karma.conf.js package.json app core checkmark checkmark.filter.js checkmark.filter.spec.js phone phone.module.js phone.service.js phone.service.spec.js core.module.js phone-detail phone-detail.component.js phone-detail.component.spec.js phone-detail.module.js phone-detail.template.html phone-list phone-list.component.js phone-list.component.spec.js phone-list.module.js phone-list.template.html img … phones … app.animations.js app.config.js app.css app.module.js index.html e2e-tests protractor-conf.js scenarios.js ``` This is actually a pretty good starting point. The code uses the AngularJS 1.5 component API and the organization follows the [AngularJS Style Guide](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md "Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub"), which is an important [preparation step](upgrade#follow-the-angularjs-style-guide "Follow the AngularJS Style Guide - Upgrading from AngularJS to Angular | Angular") before a successful upgrade. 
* Each component, service, and filter is in its own source file, as per the [Rule of 1](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md#single-responsibility "Single Responsibility - Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub"). * The `core`, `phone-detail`, and `phone-list` modules are each in their own subdirectory. Those subdirectories contain the JavaScript code as well as the HTML templates that go with each particular feature. This is in line with the [Folders-by-Feature Structure](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md#folders-by-feature-structure "Folders-by-Feature Structure - Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub") and [Modularity](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md#modularity "Modularity - Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub") rules. * Unit tests are located side-by-side with application code where they are easily found, as described in the rules for [Organizing Tests](https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md#organizing-tests "Organizing Tests - Angular 1 Style Guide | johnpapa/angular-styleguide | GitHub"). ### Switching to TypeScript Since you're going to be writing Angular code in TypeScript, it makes sense to bring in the TypeScript compiler even before you begin upgrading. You'll also start to gradually phase out the Bower package manager in favor of NPM, installing all new dependencies using NPM, and eventually removing Bower from the project. Begin by installing TypeScript to the project. ``` npm i typescript --save-dev ``` Install type definitions for the existing libraries that you're using but that don't come with prepackaged types: AngularJS, AngularJS Material, and the Jasmine unit test framework. For the PhoneCat app, we can install the necessary type definitions by running the following command: ``` npm install @types/jasmine @types/angular @types/angular-animate @types/angular-aria @types/angular-cookies @types/angular-mocks @types/angular-resource @types/angular-route @types/angular-sanitize --save-dev ``` If you are using AngularJS Material, you can install the type definitions via: ``` npm install @types/angular-material --save-dev ``` You should also configure the TypeScript compiler with a `tsconfig.json` in the project directory as described in the [TypeScript Configuration](typescript-configuration "TypeScript configuration | Angular") guide. The `tsconfig.json` file tells the TypeScript compiler how to turn your TypeScript files into ES5 code bundled into CommonJS modules. Finally, you should add some npm scripts in `package.json` to compile the TypeScript files to JavaScript (based on the `tsconfig.json` configuration file): ``` "scripts": { "tsc": "tsc", "tsc:w": "tsc -w", … ``` Now launch the TypeScript compiler from the command line in watch mode: ``` npm run tsc:w ``` Keep this process running in the background, watching and recompiling as you make changes. Next, convert your current JavaScript files into TypeScript. Since TypeScript is a super-set of ECMAScript 2015, which in turn is a super-set of ECMAScript 5, you can switch the file extensions from `.js` to `.ts` and everything will work just like it did before. As the TypeScript compiler runs, it emits the corresponding `.js` file for every `.ts` file and the compiled JavaScript is what actually gets executed. 
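For reference, a minimal `tsconfig.json` along the lines described above might look like the following. This is only a sketch that matches the behavior mentioned in this section (ES5 output, CommonJS modules); the exact options in your project may differ, so treat the specific values as assumptions rather than required settings.

```
{
  "compilerOptions": {
    "target": "es5",                  // emit ES5 JavaScript
    "module": "commonjs",             // bundle modules as CommonJS
    "moduleResolution": "node",
    "sourceMap": true,
    "experimentalDecorators": true,   // needed once Angular decorators come into play
    "emitDecoratorMetadata": true,
    "lib": ["es2015", "dom"]
  }
}
```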
If you start the project HTTP server with `npm start`, you should see the fully functional application in your browser. Now that you have TypeScript though, you can start benefiting from some of its features. There is a lot of value the language can provide to AngularJS applications. For one thing, TypeScript is a superset of ES2015. Any application that has previously been written in ES5, as the PhoneCat example has, can with TypeScript start incorporating all of the JavaScript features that are new to ES2015. These include things like `let`s and `const`s, arrow functions, default function parameters, and destructuring assignments. Another thing you can do is start adding *type safety* to your code. This has actually already partially happened because of the AngularJS typings you installed. TypeScript is already checking that you are calling AngularJS APIs correctly when you do things like register components with Angular modules. But you can also start adding *type annotations* to get even more out of TypeScript's type system. For instance, you can annotate the checkmark filter so that it explicitly expects booleans as arguments. This makes it clearer what the filter is supposed to do. ``` angular. module('core'). filter('checkmark', () => (input: boolean) => input ? '\u2713' : '\u2718'); ``` In the `Phone` service, you can explicitly annotate the `$resource` service dependency as an `angular.resource.IResourceService`, a type defined by the AngularJS typings. ``` angular. module('core.phone'). factory('Phone', ['$resource', ($resource: angular.resource.IResourceService) => $resource('phones/:phoneId.json', {}, { query: { method: 'GET', params: {phoneId: 'phones'}, isArray: true } }) ]); ``` You can apply the same trick to the route configuration file of the application in `app.config.ts`, where you are using the location and route services. By annotating them accordingly, TypeScript can verify that you're calling their APIs with the correct kinds of arguments. ``` angular. module('phonecatApp'). config(['$locationProvider', '$routeProvider', function config($locationProvider: angular.ILocationProvider, $routeProvider: angular.route.IRouteProvider) { $locationProvider.hashPrefix('!'); $routeProvider. when('/phones', { template: '<phone-list></phone-list>' }). when('/phones/:phoneId', { template: '<phone-detail></phone-detail>' }). otherwise('/phones'); } ]); ``` > The [AngularJS 1.x type definitions](https://www.npmjs.com/package/@types/angular "@types/angular | npm") you installed are not officially maintained by the Angular team, but are quite comprehensive. It is possible to make an AngularJS 1.x application fully type-annotated with the help of these definitions. > > If this is something you wanted to do, it would be a good idea to enable the `noImplicitAny` configuration option in `tsconfig.json`. This would cause the TypeScript compiler to display a warning when there is any code that does not yet have type annotations. You could use it as a guide to gauge how close you are to having a fully annotated project. > > Another TypeScript feature you can make use of is *classes*. In particular, you can turn component controllers into classes. That way they'll be a step closer to becoming Angular component classes, which will make life easier once you upgrade. AngularJS expects controllers to be constructor functions. That is exactly what ES2015/TypeScript classes are under the hood, so that means you can just plug in a class as a component controller and AngularJS will happily use it.
Here is what the new class for the phone list component controller looks like: ``` class PhoneListController { phones: any[]; orderProp: string; query: string; static $inject = ['Phone']; constructor(Phone: any) { this.phones = Phone.query(); this.orderProp = 'age'; } } angular. module('phoneList'). component('phoneList', { templateUrl: 'phone-list/phone-list.template.html', controller: PhoneListController }); ``` What was previously done in the controller function is now done in the class constructor function. The dependency injection annotations are attached to the class using a static property `$inject`. At runtime this becomes the `PhoneListController.$inject` property. The class additionally declares three members: The array of phones, the name of the current sort key, and the search query. These are all things you have already been attaching to the controller but that weren't explicitly declared anywhere. The last one of these isn't actually used in the TypeScript code since it is only referred to in the template, but for the sake of clarity you should define all of the controller members. In the Phone detail controller, you'll have two members: One for the phone that the user is looking at and another for the URL of the currently displayed image: ``` class PhoneDetailController { phone: any; mainImageUrl: string; static $inject = ['$routeParams', 'Phone']; constructor($routeParams: angular.route.IRouteParamsService, Phone: any) { const phoneId = $routeParams.phoneId; this.phone = Phone.get({phoneId}, (phone: any) => { this.setImage(phone.images[0]); }); } setImage(imageUrl: string) { this.mainImageUrl = imageUrl; } } angular. module('phoneDetail'). component('phoneDetail', { templateUrl: 'phone-detail/phone-detail.template.html', controller: PhoneDetailController }); ``` This makes the controller code look a lot more like Angular already. You're all set to actually introduce Angular into the project. If you had any AngularJS services in the project, those would also be a good candidate for converting to classes, since like controllers, they're also constructor functions. But you only have the `Phone` factory in this project, and that is a bit special since it is an `ngResource` factory. So you won't be doing anything to it in the preparation stage. You'll instead turn it directly into an Angular service. ### Installing Angular Having completed the preparation work, get going with the Angular upgrade of PhoneCat. You'll do this incrementally with the help of [ngUpgrade](upgrade#upgrading-with-ngupgrade "Upgrading with ngUpgrade - Upgrading from AngularJS to Angular | Angular") that comes with Angular. By the time you're done, you'll be able to remove AngularJS from the project completely, but the key is to do this piece by piece without breaking the application. > The project also contains some animations. You won't upgrade them in this version of the guide. Turn to the [Angular animations](animations "Introduction to Angular animations | Angular") guide to learn about that. > > Install Angular into the project, along with the SystemJS module loader. Take a look at the results of the [upgrade setup instructions](upgrade-setup "Setup for upgrading from AngularJS | Angular") and get the following configurations from there: * Add Angular and the other new dependencies to `package.json` * The SystemJS configuration file `systemjs.config.js` to the project root directory. 
Once these are done, run: ``` npm install ``` Soon you can load Angular dependencies into the application inside `index.html`, but first you need to do some directory path adjustments. You'll need to load files from `node_modules` and the project root instead of from the `/app` directory as you've been doing to this point. Move the `app/index.html` file to the project root directory. Then change the development server root path in `package.json` to also point to the project root instead of `app`: ``` "start": "http-server ./ -a localhost -p 8000 -c-1", ``` Now you're able to serve everything from the project root to the web browser. But you do *not* want to have to change all the image and data paths used in the application code to match the development setup. For that reason, you'll add a `<base>` tag to `index.html`, which will cause relative URLs to be resolved back to the `/app` directory: ``` <base href="/app/"> ``` Now you can load Angular using SystemJS. You'll add the Angular polyfills and the SystemJS configuration to the end of the `<head>` section, and then you'll use `System.import` to load the actual application: ``` <script src="/node_modules/core-js/client/shim.min.js"></script> <script src="/node_modules/zone.js/bundles/zone.umd.js"></script> <script src="/node_modules/systemjs/dist/system.src.js"></script> <script src="/systemjs.config.js"></script> <script> System.import('/app'); </script> ``` You also need to make a couple of adjustments to the `systemjs.config.js` file installed during [upgrade setup](upgrade-setup "Setup for upgrading from AngularJS | Angular"). Point the browser to the project root when loading things through SystemJS, instead of using the `<base>` URL. Install the `upgrade` package using `npm install @angular/upgrade --save` and add a mapping for the `@angular/upgrade/[static](../api/upgrade/static)` package. ``` System.config({ paths: { // paths serve as alias 'npm:': '/node_modules/' }, map: { 'ng-loader': '../src/systemjs-angular-loader.js', app: '/app', /* . . . */ '@angular/upgrade/static': 'npm:@angular/upgrade/fesm2015/static.mjs', /* . . . */ }, ``` ### Creating the `AppModule` Now create the root `[NgModule](../api/core/ngmodule)` class called `AppModule`. There is already a file named `app.module.ts` that holds the AngularJS module. Rename it to `app.module.ajs.ts` and update the corresponding script name in the `index.html` as well. The file contents remain: ``` // Define the `phonecatApp` AngularJS module angular.module('phonecatApp', [ 'ngAnimate', 'ngRoute', 'core', 'phoneDetail', 'phoneList', ]); ``` Now create a new `app.module.ts` with the minimum `[NgModule](../api/core/ngmodule)` class: ``` import { DoBootstrap, NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; @NgModule({ imports: [ BrowserModule, ], }) export class AppModule implements DoBootstrap { } ``` ### Bootstrapping a hybrid PhoneCat Next, you'll bootstrap the application as a *hybrid application* that supports both AngularJS and Angular components. After that, you can start converting the individual pieces to Angular. The application is currently bootstrapped using the AngularJS `ng-app` directive attached to the `<html>` element of the host page. This will no longer work in the hybrid application. Switch to the [ngUpgrade bootstrap](upgrade#bootstrapping-hybrid-applications "Bootstrapping hybrid applications - Upgrading from AngularJS to Angular | Angular") method instead. First, remove the `ng-app` attribute from `index.html`. 
Then import `[UpgradeModule](../api/upgrade/static/upgrademodule)` in the `AppModule`, and override its `ngDoBootstrap` method: ``` import { UpgradeModule } from '@angular/upgrade/static'; @NgModule({ imports: [ BrowserModule, UpgradeModule, ], }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.documentElement, ['phonecatApp']); } } ``` You are bootstrapping the AngularJS module from inside `ngDoBootstrap`. The arguments are the same as you would pass to `angular.bootstrap` if you were manually bootstrapping AngularJS: the root element of the application; and an array of the AngularJS 1.x modules that you want to load. Finally, bootstrap the `AppModule` in `app/main.ts`. This file has been configured as the application entrypoint in `systemjs.config.js`, so it is already being loaded by the browser. ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app.module'; platformBrowserDynamic().bootstrapModule(AppModule); ``` Now you're running both AngularJS and Angular at the same time. That is pretty exciting! You're not running any actual Angular components yet. That is next. `@types/angular` is declared as a UMD module, and due to the way [UMD typings](https://github.com/Microsoft/TypeScript/wiki/What's-new-in-TypeScript#support-for-umd-module-definitions "Support for UMD module definitions - What's new in TypeScript | microsoft/TypeScript | GitHub") work, once you have an ES6 `import` statement in a file all UMD typed modules must also be imported using `import` statements instead of being globally available. AngularJS is currently loaded by a script tag in `index.html`, which means that the whole app has access to it as a global and uses the same instance of the `angular` variable. If you used `import * as angular from 'angular'` instead, you'd also have to load every file in the AngularJS application to use ES2015 modules in order to ensure AngularJS was being loaded correctly. This is a considerable effort and it often isn't worth it, especially since you are in the process of moving your code to Angular. Instead, declare `angular` as `angular.IAngularStatic` to indicate it is a global variable and still have full typing support. Starting with Angular version 13, the [distribution format](https://github.com/angular/angular/issues/38366 " Issue 38366: RFC: Ivy Library Distribution| angular/angular | GitHub") no longer includes UMD bundles. If your use case requires the UMD format, use [`rollup`](https://rollupjs.org "rollup.js") to manually produce a bundle from the flat ES module files. 1. Use `npm` to globally install `rollup` ``` npm i -g rollup ``` 2. Output the version of `rollup` and verify the installation was successful ``` rollup -v ``` 3. Create the `rollup.config.js` configuration file for `rollup` to use the global `ng` command to reference all of the Angular framework exports. 1. Create a file named `rollup.config.js` 2. Copy the following content into `rollup.config.js` ``` export default { input: 'node_modules/@angular/core/fesm2015/core.js', output: { file: 'bundle.js', format: 'umd', name: 'ng' } } ``` 4. Use `rollup` to create the `bundle.js` UMD bundle using settings in `rollup.config.js` ``` rollup -c rollup.config.js ``` The `bundle.js` file contains your UMD bundle. 
For an example on GitHub, see [UMD Angular bundle](https://github.com/mgechev/angular-umd-bundle "UMD Angular bundle | mgechev/angular-umd-bundle | GitHub"). ### Upgrading the Phone service The first piece you'll port over to Angular is the `Phone` service, which resides in `app/core/phone/phone.service.ts` and makes it possible for components to load phone information from the server. Right now it is implemented with ngResource and you're using it for two things: * For loading the list of all phones into the phone list component * For loading the details of a single phone into the phone detail component You can replace this implementation with an Angular service class, while keeping the controllers in AngularJS land. In the new version, you import the Angular HTTP module and call its `[HttpClient](../api/common/http/httpclient)` service instead of `ngResource`. Re-open the `app.module.ts` file, import and add `[HttpClientModule](../api/common/http/httpclientmodule)` to the `imports` array of the `AppModule`: ``` import { HttpClientModule } from '@angular/common/http'; @NgModule({ imports: [ BrowserModule, UpgradeModule, HttpClientModule, ], }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.documentElement, ['phonecatApp']); } } ``` Now you're ready to upgrade the Phone service itself. Replace the ngResource-based service in `phone.service.ts` with a TypeScript class decorated as `@[Injectable](../api/core/injectable)`: ``` @Injectable() export class Phone { /* . . . */ } ``` The `@[Injectable](../api/core/injectable)` decorator will attach some dependency injection metadata to the class, letting Angular know about its dependencies. As described by the [Dependency Injection Guide](dependency-injection "Dependency injection in Angular | Angular"), this is a marker decorator you need to use for classes that have no other Angular decorators but still need to have their dependencies injected. In its constructor the class expects to get the `[HttpClient](../api/common/http/httpclient)` service. It will be injected to it and it is stored as a private field. The service is then used in the two instance methods, one of which loads the list of all phones, and the other loads the details of a specified phone: ``` @Injectable() export class Phone { constructor(private http: HttpClient) { } query(): Observable<PhoneData[]> { return this.http.get<PhoneData[]>(`phones/phones.json`); } get(id: string): Observable<PhoneData> { return this.http.get<PhoneData>(`phones/${id}.json`); } } ``` The methods now return observables of type `PhoneData` and `PhoneData[]`. This is a type you don't have yet. Add a simple interface for it: ``` export interface PhoneData { name: string; snippet: string; images: string[]; } ``` `@angular/upgrade/[static](../api/upgrade/static)` has a `[downgradeInjectable](../api/upgrade/static/downgradeinjectable)` method for the purpose of making Angular services available to AngularJS code. Use it to plug in the `Phone` service: ``` declare const angular: angular.IAngularStatic; import { downgradeInjectable } from '@angular/upgrade/static'; /* . . . */ @Injectable() export class Phone { /* . . . 
*/ } angular.module('core.phone') .factory('phone', downgradeInjectable(Phone)); ``` Here is the full, final code for the service: ``` import { Injectable } from '@angular/core'; import { HttpClient } from '@angular/common/http'; import { Observable } from 'rxjs'; declare const angular: angular.IAngularStatic; import { downgradeInjectable } from '@angular/upgrade/static'; export interface PhoneData { name: string; snippet: string; images: string[]; } @Injectable() export class Phone { constructor(private http: HttpClient) { } query(): Observable<PhoneData[]> { return this.http.get<PhoneData[]>(`phones/phones.json`); } get(id: string): Observable<PhoneData> { return this.http.get<PhoneData>(`phones/${id}.json`); } } angular.module('core.phone') .factory('phone', downgradeInjectable(Phone)); ``` Notice that `Observable` is imported from `rxjs`. If you later use RxJS operators such as `map` in these methods, import each operator individually as well. The new `Phone` service has the same features as the original, `ngResource`-based service. Because it is an Angular service, you register it with the `[NgModule](../api/core/ngmodule)` providers: ``` import { Phone } from './core/phone/phone.service'; @NgModule({ imports: [ BrowserModule, UpgradeModule, HttpClientModule, ], providers: [ Phone, ] }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.documentElement, ['phonecatApp']); } } ``` Now that you are loading `phone.service.ts` through an import that is resolved by SystemJS, you should **remove the `<script>` tag** for the service from `index.html`. This is something you'll do to all components as you upgrade them. Simultaneously with the AngularJS to Angular upgrade, you're also migrating code from scripts to modules. At this point, you can switch the two components to use the new service instead of the old one. While you `$inject` it as the downgraded `phone` factory, it is really an instance of the `Phone` class and you annotate its type accordingly: ``` declare const angular: angular.IAngularStatic; import { Phone, PhoneData } from '../core/phone/phone.service'; class PhoneListController { phones: PhoneData[]; orderProp: string; static $inject = ['phone']; constructor(phone: Phone) { phone.query().subscribe(phones => { this.phones = phones; }); this.orderProp = 'age'; } } angular. module('phoneList'). component('phoneList', { templateUrl: 'app/phone-list/phone-list.template.html', controller: PhoneListController }); ``` ``` declare const angular: angular.IAngularStatic; import { Phone, PhoneData } from '../core/phone/phone.service'; class PhoneDetailController { phone: PhoneData; mainImageUrl: string; static $inject = ['$routeParams', 'phone']; constructor($routeParams: angular.route.IRouteParamsService, phone: Phone) { const phoneId = $routeParams.phoneId; phone.get(phoneId).subscribe(data => { this.phone = data; this.setImage(data.images[0]); }); } setImage(imageUrl: string) { this.mainImageUrl = imageUrl; } } angular. module('phoneDetail'). component('phoneDetail', { templateUrl: 'phone-detail/phone-detail.template.html', controller: PhoneDetailController }); ``` Now there are two AngularJS components using an Angular service! The components don't need to be aware of this, though the fact that the service returns observables and not promises is a bit of a giveaway. In any case, what you've achieved is a migration of a service to Angular without having to yet migrate the components that use it.
> You could use the `toPromise` method of `Observable` to turn those observables into promises in the service, as sketched above. In many cases that reduces the number of changes to the component controllers. > > ### Upgrading Components Upgrade the AngularJS components to Angular components next. Do it one component at a time while still keeping the application in hybrid mode. As you make these conversions, you'll also define your first Angular *pipes*. Look at the phone list component first. Right now it contains a TypeScript controller class and a component definition object. You can morph this into an Angular component by just renaming the controller class and turning the AngularJS component definition object into an Angular `@[Component](../api/core/component)` decorator. You can then also remove the static `$inject` property from the class: ``` import { Component } from '@angular/core'; import { Phone, PhoneData } from '../core/phone/phone.service'; @Component({ selector: 'phone-list', templateUrl: './phone-list.template.html' }) export class PhoneListComponent { phones: PhoneData[]; query: string; orderProp: string; constructor(phone: Phone) { phone.query().subscribe(phones => { this.phones = phones; }); this.orderProp = 'age'; } /* . . . */ } ``` The `selector` attribute is a CSS selector that defines where on the page the component should go. In AngularJS you do matching based on component names, but in Angular you have these explicit selectors. This one will match elements with the name `phone-list`, just like the AngularJS version did. Now convert the template of this component into Angular syntax. In the search controls, replace the AngularJS `$ctrl` expressions with the two-way `[([ngModel](../api/forms/ngmodel))]` binding syntax of Angular: ``` <p> Search: <input [(ngModel)]="query" /> </p> <p> Sort by: <select [(ngModel)]="orderProp"> <option value="name">Alphabetical</option> <option value="age">Newest</option> </select> </p> ``` Replace the `ng-repeat` of the list with an `*[ngFor](../api/common/ngfor)` as [described in the Template Syntax page](built-in-directives "Built-in directives | Angular"). Replace the `ng-src` of the image tag with a binding to the native `src` property. ``` <ul class="phones"> <li *ngFor="let phone of getPhones()" class="thumbnail phone-list-item"> <a href="/#!/phones/{{phone.id}}" class="thumb"> <img [src]="phone.imageUrl" [alt]="phone.name" /> </a> <a href="/#!/phones/{{phone.id}}" class="name">{{phone.name}}</a> <p>{{phone.snippet}}</p> </li> </ul> ``` #### No Angular `filter` or `orderBy` filters The built-in AngularJS `filter` and `orderBy` filters do not exist in Angular, so you need to do the filtering and sorting yourself. Replace the `filter` and `orderBy` filters with bindings to the `getPhones()` controller method, which implements the filtering and ordering logic inside the component itself.
``` getPhones(): PhoneData[] { return this.sortPhones(this.filterPhones(this.phones)); } private filterPhones(phones: PhoneData[]) { if (phones && this.query) { return phones.filter(phone => { const name = phone.name.toLowerCase(); const snippet = phone.snippet.toLowerCase(); return name.indexOf(this.query) >= 0 || snippet.indexOf(this.query) >= 0; }); } return phones; } private sortPhones(phones: PhoneData[]) { if (phones && this.orderProp) { return phones .slice(0) // Make a copy .sort((a, b) => { if (a[this.orderProp] < b[this.orderProp]) { return -1; } else if (b[this.orderProp] < a[this.orderProp]) { return 1; } else { return 0; } }); } return phones; } ``` Now you need to downgrade the Angular component so you can use it in AngularJS. Instead of registering a component, you register a `phoneList` *directive*, a downgraded version of the Angular component. The `as angular.IDirectiveFactory` cast tells the TypeScript compiler that the return value of the `[downgradeComponent](../api/upgrade/static/downgradecomponent)` method is a directive factory. ``` declare const angular: angular.IAngularStatic; import { downgradeComponent } from '@angular/upgrade/static'; /* . . . */ @Component({ selector: 'phone-list', templateUrl: './phone-list.template.html' }) export class PhoneListComponent { /* . . . */ } angular.module('phoneList') .directive( 'phoneList', downgradeComponent({component: PhoneListComponent}) as angular.IDirectiveFactory ); ``` The new `PhoneListComponent` uses the Angular `[ngModel](../api/forms/ngmodel)` directive, located in the `[FormsModule](../api/forms/formsmodule)`. Add the `[FormsModule](../api/forms/formsmodule)` to `[NgModule](../api/core/ngmodule)` imports and declare the new `PhoneListComponent` since you downgraded it: ``` import { FormsModule } from '@angular/forms'; import { PhoneListComponent } from './phone-list/phone-list.component'; @NgModule({ imports: [ BrowserModule, UpgradeModule, HttpClientModule, FormsModule, ], declarations: [ PhoneListComponent, ], }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.documentElement, ['phonecatApp']); } } ``` Remove the <script> tag for the phone list component from `index.html`. Now set the remaining `phone-detail.component.ts` as follows: ``` declare const angular: angular.IAngularStatic; import { downgradeComponent } from '@angular/upgrade/static'; import { Component } from '@angular/core'; import { Phone, PhoneData } from '../core/phone/phone.service'; import { RouteParams } from '../ajs-upgraded-providers'; @Component({ selector: 'phone-detail', templateUrl: './phone-detail.template.html', }) export class PhoneDetailComponent { phone: PhoneData; mainImageUrl: string; constructor(routeParams: RouteParams, phone: Phone) { phone.get(routeParams.phoneId).subscribe(data => { this.phone = data; this.setImage(data.images[0]); }); } setImage(imageUrl: string) { this.mainImageUrl = imageUrl; } } angular.module('phoneDetail') .directive( 'phoneDetail', downgradeComponent({component: PhoneDetailComponent}) as angular.IDirectiveFactory ); ``` This is similar to the phone list component. The new wrinkle is the `RouteParams` type annotation that identifies the `routeParams` dependency. The AngularJS injector has an AngularJS router dependency called `$routeParams`, which was injected into `PhoneDetailController` when it was still an AngularJS controller. You intend to inject it into the new `PhoneDetailComponent`.
Unfortunately, AngularJS dependencies are not automatically available to Angular components. You must upgrade this service using a [factory provider](upgrade#making-angularjs-dependencies-injectable-to-angular "Making AngularJS Dependencies Injectable to Angular - Upgrading from AngularJS to Angular | Angular") to make `$routeParams` an Angular injectable. Do that in a new file called `ajs-upgraded-providers.ts` and import it in `app.module.ts`: ``` export abstract class RouteParams { [key: string]: string; } export function routeParamsFactory(i: any) { return i.get('$routeParams'); } export const routeParamsProvider = { provide: RouteParams, useFactory: routeParamsFactory, deps: ['$injector'] }; ``` ``` import { routeParamsProvider } from './ajs-upgraded-providers'; providers: [ Phone, routeParamsProvider ] ``` Convert the phone detail component template into Angular syntax as follows: ``` <div *ngIf="phone"> <div class="phone-images"> <img [src]="img" class="phone" alt="Phone {{ phone.name }} - thumbnail {{ index }}" [ngClass]="{'selected': img === mainImageUrl}" *ngFor="let img of phone.images; let index = index;" /> </div> <h1>{{phone.name}}</h1> <p>{{phone.description}}</p> <ul class="phone-thumbs"> <li *ngFor="let img of phone.images; let index = index"> <img [src]="img" (click)="setImage(img)" alt="Phone {{ phone.name }} - thumbnail {{ index }}"/> </li> </ul> <ul class="specs"> <li> <span>Availability and Networks</span> <dl> <dt>Availability</dt> <dd *ngFor="let availability of phone.availability">{{availability}}</dd> </dl> </li> <li> <span>Battery</span> <dl> <dt>Type</dt> <dd>{{phone.battery?.type}}</dd> <dt>Talk Time</dt> <dd>{{phone.battery?.talkTime}}</dd> <dt>Standby time (max)</dt> <dd>{{phone.battery?.standbyTime}}</dd> </dl> </li> <li> <span>Storage and Memory</span> <dl> <dt>RAM</dt> <dd>{{phone.storage?.ram}}</dd> <dt>Internal Storage</dt> <dd>{{phone.storage?.flash}}</dd> </dl> </li> <li> <span>Connectivity</span> <dl> <dt>Network Support</dt> <dd>{{phone.connectivity?.cell}}</dd> <dt>WiFi</dt> <dd>{{phone.connectivity?.wifi}}</dd> <dt>Bluetooth</dt> <dd>{{phone.connectivity?.bluetooth}}</dd> <dt>Infrared</dt> <dd>{{phone.connectivity?.infrared | checkmark}}</dd> <dt>GPS</dt> <dd>{{phone.connectivity?.gps | checkmark}}</dd> </dl> </li> <li> <span>Android</span> <dl> <dt>OS Version</dt> <dd>{{phone.android?.os}}</dd> <dt>UI</dt> <dd>{{phone.android?.ui}}</dd> </dl> </li> <li> <span>Size and Weight</span> <dl> <dt>Dimensions</dt> <dd *ngFor="let dim of phone.sizeAndWeight?.dimensions">{{dim}}</dd> <dt>Weight</dt> <dd>{{phone.sizeAndWeight?.weight}}</dd> </dl> </li> <li> <span>Display</span> <dl> <dt>Screen size</dt> <dd>{{phone.display?.screenSize}}</dd> <dt>Screen resolution</dt> <dd>{{phone.display?.screenResolution}}</dd> <dt>Touch screen</dt> <dd>{{phone.display?.touchScreen | checkmark}}</dd> </dl> </li> <li> <span>Hardware</span> <dl> <dt>CPU</dt> <dd>{{phone.hardware?.cpu}}</dd> <dt>USB</dt> <dd>{{phone.hardware?.usb}}</dd> <dt>Audio / headphone jack</dt> <dd>{{phone.hardware?.audioJack}}</dd> <dt>FM Radio</dt> <dd>{{phone.hardware?.fmRadio | checkmark}}</dd> <dt>Accelerometer</dt> <dd>{{phone.hardware?.accelerometer | checkmark}}</dd> </dl> </li> <li> <span>Camera</span> <dl> <dt>Primary</dt> <dd>{{phone.camera?.primary}}</dd> <dt>Features</dt> <dd>{{phone.camera?.features?.join(', ')}}</dd> </dl> </li> <li> <span>Additional Features</span> <dd>{{phone.additionalFeatures}}</dd> </li> </ul> </div> ``` There are several notable changes here: * You've removed the 
`$ctrl.` prefix from all expressions * You've replaced `ng-src` with property bindings for the standard `src` property * You're using the property binding syntax around `ng-class`. Though Angular does have a [very similar `ngClass`](built-in-directives "Built-in directives | Angular") as AngularJS does, its value is not magically evaluated as an expression. In Angular, you always specify in the template when the value of an attribute is a property expression, as opposed to a literal string. * You've replaced `ng-repeat`s with `*[ngFor](../api/common/ngfor)`s * You've replaced `ng-click` with an event binding for the standard `click` * You've wrapped the whole template in an `[ngIf](../api/common/ngif)` that causes it only to be rendered when there is a phone present. You need this because when the component first loads, you don't have `phone` yet and the expressions will refer to a non-existing value. Unlike in AngularJS, Angular expressions do not fail silently when you try to refer to properties on undefined objects. You need to be explicit about cases where this is expected. Add `PhoneDetailComponent` component to the `[NgModule](../api/core/ngmodule)` *declarations*: ``` import { PhoneDetailComponent } from './phone-detail/phone-detail.component'; @NgModule({ imports: [ BrowserModule, UpgradeModule, HttpClientModule, FormsModule, ], declarations: [ PhoneListComponent, PhoneDetailComponent, ], providers: [ Phone, routeParamsProvider ] }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.documentElement, ['phonecatApp']); } } ``` You should now also remove the phone detail component <script> tag from `index.html`. #### Add the *CheckmarkPipe* The AngularJS directive had a `checkmark` *filter*. Turn that into an Angular **pipe**. There is no upgrade method to convert filters into pipes. You won't miss it. It is easy to turn the filter function into an equivalent Pipe class. The implementation is the same as before, repackaged in the `transform` method. Rename the file to `checkmark.pipe.ts` to conform with Angular conventions: ``` import { Pipe, PipeTransform } from '@angular/core'; @Pipe({name: 'checkmark'}) export class CheckmarkPipe implements PipeTransform { transform(input: boolean) { return input ? '\u2713' : '\u2718'; } } ``` Now import and declare the newly created pipe and remove the filter <script> tag from `index.html`: ``` import { CheckmarkPipe } from './core/checkmark/checkmark.pipe'; @NgModule({ imports: [ BrowserModule, UpgradeModule, HttpClientModule, FormsModule, ], declarations: [ PhoneListComponent, PhoneDetailComponent, CheckmarkPipe ], providers: [ Phone, routeParamsProvider ] }) export class AppModule implements DoBootstrap { constructor(private upgrade: UpgradeModule) { } ngDoBootstrap() { this.upgrade.bootstrap(document.documentElement, ['phonecatApp']); } } ``` ### AOT compile the hybrid app To use AOT with a hybrid app, you have to first set it up like any other Angular application, as shown in the [Ahead-of-time Compilation chapter](aot-compiler "Ahead-of-time (AOT) compilation | Angular"). 
Then change `main-aot.ts` to bootstrap the `AppModule` using the `platformBrowser()` platform, which works with the output of the AOT compiler: ``` import { platformBrowser } from '@angular/platform-browser'; import { AppModule } from './app.module'; platformBrowser().bootstrapModule(AppModule); ``` You need to load all the AngularJS files you already use in `index.html` in `aot/index.html` as well: ``` <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <base href="/app/"> <title>Google Phone Gallery</title> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" /> <link rel="stylesheet" href="app.css" /> <link rel="stylesheet" href="app.animations.css" /> <script src="https://code.jquery.com/jquery-2.2.4.js"></script> <script src="https://code.angularjs.org/1.5.5/angular.js"></script> <script src="https://code.angularjs.org/1.5.5/angular-animate.js"></script> <script src="https://code.angularjs.org/1.5.5/angular-resource.js"></script> <script src="https://code.angularjs.org/1.5.5/angular-route.js"></script> <script src="app.module.ajs.js"></script> <script src="app.config.js"></script> <script src="app.animations.js"></script> <script src="core/core.module.js"></script> <script src="core/phone/phone.module.js"></script> <script src="phone-list/phone-list.module.js"></script> <script src="phone-detail/phone-detail.module.js"></script> <script src="/node_modules/core-js/client/shim.min.js"></script> <script src="/node_modules/zone.js/bundles/zone.umd.min.js"></script> <script>window.module = 'aot';</script> </head> <body> <div class="view-container"> <div ng-view class="view-frame"></div> </div> </body> <script src="/dist/build.js"></script> </html> ``` These files need to be copied together with the polyfills. The files the application needs at runtime, like the `.json` phone lists and images, also need to be copied. Install `fs-extra` using `npm install fs-extra --save-dev` for better file copying, and change `copy-dist-files.js` to the following: ``` var fsExtra = require('fs-extra'); var resources = [ // polyfills 'node_modules/core-js/client/shim.min.js', 'node_modules/zone.js/bundles/zone.umd.min.js', // css 'app/app.css', 'app/app.animations.css', // images and json files 'app/img/', 'app/phones/', // app files 'app/app.module.ajs.js', 'app/app.config.js', 'app/app.animations.js', 'app/core/core.module.js', 'app/core/phone/phone.module.js', 'app/phone-list/phone-list.module.js', 'app/phone-detail/phone-detail.module.js' ]; resources.map(function(sourcePath) { // Need to rename zone.umd.min.js to zone.min.js var destPath = `aot/${sourcePath}`.replace('.umd.min.js', '.min.js'); fsExtra.copySync(sourcePath, destPath); }); ``` And that is all you need to use AOT while upgrading your app! ### Adding The Angular Router And Bootstrap At this point, you've replaced all AngularJS application components with their Angular counterparts, even though you're still serving them from the AngularJS router. #### Add the Angular router Angular has an [all-new router](router "Common Routing Tasks | Angular"). Like all routers, it needs a place in the UI to display routed views. For Angular that is the `<[router-outlet](../api/router/routeroutlet)>` and it belongs in a *root component* at the top of the application's component tree. You don't yet have such a root component, because the application is still managed as an AngularJS app.
Create a new `app.component.ts` file with the following `AppComponent` class: ``` import { Component } from '@angular/core'; @Component({ selector: 'phonecat-app', template: '<router-outlet></router-outlet>' }) export class AppComponent { } ``` It has a template that only includes the `<[router-outlet](../api/router/routeroutlet)>`. This component just renders the contents of the active route and nothing else. The selector tells Angular to plug this root component into the `<phonecat-app>` element on the host web page when the application launches. Add this `<phonecat-app>` element to the `index.html`. It replaces the old AngularJS `ng-view` directive: ``` <body> <phonecat-app></phonecat-app> </body> ``` #### Create the *Routing Module* A router needs configuration, whether it is the AngularJS router, the Angular router, or any other router. The details of Angular router configuration are best left to the [Routing documentation](router "Common Routing Tasks | Angular") which recommends that you create an `[NgModule](../api/core/ngmodule)` dedicated to router configuration (called a *Routing Module*). ``` import { NgModule } from '@angular/core'; import { Routes, RouterModule } from '@angular/router'; import { APP_BASE_HREF, HashLocationStrategy, LocationStrategy } from '@angular/common'; import { PhoneDetailComponent } from './phone-detail/phone-detail.component'; import { PhoneListComponent } from './phone-list/phone-list.component'; const routes: Routes = [ { path: '', redirectTo: 'phones', pathMatch: 'full' }, { path: 'phones', component: PhoneListComponent }, { path: 'phones/:phoneId', component: PhoneDetailComponent } ]; @NgModule({ imports: [ RouterModule.forRoot(routes) ], exports: [ RouterModule ], providers: [ { provide: APP_BASE_HREF, useValue: '!' }, { provide: LocationStrategy, useClass: HashLocationStrategy }, ] }) export class AppRoutingModule { } ``` This module defines a `routes` object with two routes to the two phone components and a default route for the empty path. It passes the `routes` to the `RouterModule.forRoot` method which does the rest. A couple of extra providers enable routing with "hash" URLs such as `#!/phones` instead of the default "push state" strategy. Now update the `AppModule` to import this `AppRoutingModule` and also declare the root `AppComponent` as the bootstrap component. That tells Angular that it should bootstrap the application with the *root* `AppComponent` and insert its view into the host web page. You must also remove the bootstrap of the AngularJS module from `ngDoBootstrap()` in `app.module.ts` and the `[UpgradeModule](../api/upgrade/static/upgrademodule)` import.
``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { HttpClientModule } from '@angular/common/http'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; import { CheckmarkPipe } from './core/checkmark/checkmark.pipe'; import { Phone } from './core/phone/phone.service'; import { PhoneDetailComponent } from './phone-detail/phone-detail.component'; import { PhoneListComponent } from './phone-list/phone-list.component'; @NgModule({ imports: [ BrowserModule, FormsModule, HttpClientModule, AppRoutingModule ], declarations: [ AppComponent, PhoneListComponent, CheckmarkPipe, PhoneDetailComponent ], providers: [ Phone ], bootstrap: [ AppComponent ] }) export class AppModule {} ``` And since you are routing to `PhoneListComponent` and `PhoneDetailComponent` directly rather than using a route template with a `<phone-list>` or `<phone-detail>` tag, you can do away with their Angular selectors as well. #### Generate links for each phone You no longer have to hardcode the links to phone details in the phone list. You can generate data bindings for the `id` of each phone to the `[routerLink](../api/router/routerlink)` directive and let that directive construct the appropriate URL to the `PhoneDetailComponent`: ``` <ul class="phones"> <li *ngFor="let phone of getPhones()" class="thumbnail phone-list-item"> <a [routerLink]="['/phones', phone.id]" class="thumb"> <img [src]="phone.imageUrl" [alt]="phone.name" /> </a> <a [routerLink]="['/phones', phone.id]" class="name">{{phone.name}}</a> <p>{{phone.snippet}}</p> </li> </ul> ``` > See the [Routing](router "Common Routing Tasks | Angular") page for details. > > #### Use route parameters The Angular router passes route parameters differently. Correct the `PhoneDetail` component constructor to expect an injected `[ActivatedRoute](../api/router/activatedroute)` object. Extract the `phoneId` from the `ActivatedRoute.snapshot.params` and fetch the phone data as before: ``` import { Component } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { Phone, PhoneData } from '../core/phone/phone.service'; @Component({ selector: 'phone-detail', templateUrl: './phone-detail.template.html' }) export class PhoneDetailComponent { phone: PhoneData; mainImageUrl: string; constructor(activatedRoute: ActivatedRoute, phone: Phone) { phone.get(activatedRoute.snapshot.paramMap.get('phoneId')) .subscribe((p: PhoneData) => { this.phone = p; this.setImage(p.images[0]); }); } setImage(imageUrl: string) { this.mainImageUrl = imageUrl; } } ``` You are now running a pure Angular application! ### Say Goodbye to AngularJS It is time to take off the training wheels and let the application begin its new life as a pure, shiny Angular app. The remaining tasks all have to do with removing code - which of course is every programmer's favorite task! The application is still bootstrapped as a hybrid app. There is no need for that anymore. Switch the bootstrap method of the application from the `[UpgradeModule](../api/upgrade/static/upgrademodule)` to the Angular way. 
``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app.module'; platformBrowserDynamic().bootstrapModule(AppModule); ``` If you haven't already, remove all references to the `[UpgradeModule](../api/upgrade/static/upgrademodule)` from `app.module.ts`, as well as any [factory provider](upgrade#making-angularjs-dependencies-injectable-to-angular "Making AngularJS Dependencies Injectable to Angular - Upgrading from AngularJS to Angular | Angular") for AngularJS services, and the `app/ajs-upgraded-providers.ts` file. Also remove any `[downgradeInjectable](../api/upgrade/static/downgradeinjectable)()` or `[downgradeComponent](../api/upgrade/static/downgradecomponent)()` you find, together with the associated AngularJS factory or directive declarations. ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { HttpClientModule } from '@angular/common/http'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; import { CheckmarkPipe } from './core/checkmark/checkmark.pipe'; import { Phone } from './core/phone/phone.service'; import { PhoneDetailComponent } from './phone-detail/phone-detail.component'; import { PhoneListComponent } from './phone-list/phone-list.component'; @NgModule({ imports: [ BrowserModule, FormsModule, HttpClientModule, AppRoutingModule ], declarations: [ AppComponent, PhoneListComponent, CheckmarkPipe, PhoneDetailComponent ], providers: [ Phone ], bootstrap: [ AppComponent ] }) export class AppModule {} ``` You may also completely remove the following files. They are AngularJS module configuration files and not needed in Angular: * `app/app.module.ajs.ts` * `app/app.config.ts` * `app/core/core.module.ts` * `app/core/phone/phone.module.ts` * `app/phone-detail/phone-detail.module.ts` * `app/phone-list/phone-list.module.ts` The external typings for AngularJS may be uninstalled as well. The only ones you still need are for Jasmine and Angular polyfills. The `@angular/upgrade` package and its mapping in `systemjs.config.js` can also go. ``` npm uninstall @angular/upgrade --save npm uninstall @types/angular @types/angular-animate @types/angular-cookies @types/angular-mocks @types/angular-resource @types/angular-route @types/angular-sanitize --save-dev ``` Finally, from `index.html`, remove all references to AngularJS scripts and jQuery. When you're done, this is what it should look like: ``` <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <base href="/app/"> <title>Google Phone Gallery</title> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" /> <link rel="stylesheet" href="app.css" /> <script src="/node_modules/core-js/client/shim.min.js"></script> <script src="/node_modules/zone.js/bundles/zone.umd.js"></script> <script src="/node_modules/systemjs/dist/system.src.js"></script> <script src="/systemjs.config.js"></script> <script> System.import('/app'); </script> </head> <body> <phonecat-app></phonecat-app> </body> </html> ``` That is the last you'll see of AngularJS! It has served us well but now it is time to say goodbye. Appendix: Upgrading PhoneCat Tests ---------------------------------- Tests can not only be retained through an upgrade process, but they can also be used as a valuable safety measure when ensuring that the application does not break during the upgrade. 
E2E tests are especially useful for this purpose. ### E2E Tests The PhoneCat project has both E2E Protractor tests and some Karma unit tests in it. Of these two, E2E tests can be dealt with much more easily: By definition, E2E tests access the application from the *outside* by interacting with the various UI elements the application puts on the screen. E2E tests aren't really that concerned with the internal structure of the application components. That also means that, although you modify the project quite a bit during the upgrade, the E2E test suite should keep passing with just minor modifications. You didn't change how the application behaves from the user's point of view. During TypeScript conversion, there is nothing to do to keep E2E tests working. But when you change the bootstrap to that of a Hybrid app, you must make a few changes. Update the `protractor-conf.js` to sync with hybrid applications: ``` ng12Hybrid: true ``` When you start to upgrade components and their templates to Angular, you'll make more changes because the E2E tests have matchers that are specific to AngularJS. For PhoneCat you need to make the following changes in order to make things work with Angular: | Previous code | New code | Details | | --- | --- | --- | | `by.repeater('phone in $ctrl.phones').column('phone.name')` | `by.css('.phones .name')` | The repeater matcher relies on AngularJS `ng-repeat` | | `by.repeater('phone in $ctrl.phones')` | `by.css('.phones li')` | The repeater matcher relies on AngularJS `ng-repeat` | | `by.model('$ctrl.query')` | `by.css('input')` | The model matcher relies on AngularJS `ng-model` | | `by.model('$ctrl.orderProp')` | `by.css('select')` | The model matcher relies on AngularJS `ng-model` | | `by.binding('$ctrl.phone.name')` | `by.css('h1')` | The binding matcher relies on AngularJS data binding | When the bootstrap method is switched from that of `[UpgradeModule](../api/upgrade/static/upgrademodule)` to pure Angular, AngularJS ceases to exist on the page completely. At this point, you need to tell Protractor that it should not be looking for an AngularJS application anymore, but instead it should find *Angular apps* on the page. Replace the `ng12Hybrid` previously added with the following in `protractor-conf.js`: ``` useAllAngular2AppRoots: true, ``` Also, there are a couple of Protractor API calls in the PhoneCat test code that are using the AngularJS `$location` service under the hood. As that service is no longer present after the upgrade, replace those calls with ones that use the generic URL APIs of WebDriver instead. The first of these is the redirection spec: ``` it('should redirect `index.html` to `index.html#!/phones', async () => { await browser.get('index.html'); await browser.waitForAngular(); const url = await browser.getCurrentUrl(); expect(url.endsWith('/phones')).toBe(true); }); ``` And the second is the phone links spec: ``` it('should render phone specific links', async () => { const query = element(by.css('input')); await query.sendKeys('nexus'); await element.all(by.css('.phones li a')).first().click(); const url = await browser.getCurrentUrl(); expect(url.endsWith('/phones/nexus-s')).toBe(true); }); ``` ### Unit Tests For unit tests, on the other hand, more conversion work is needed. Effectively they need to be *upgraded* along with the production code. During TypeScript conversion no changes are strictly necessary. But it may be a good idea to convert the unit test code into TypeScript as well.
For instance, in the phone detail component spec, you can use ES2015 features like arrow functions and block-scoped variables and benefit from the type definitions of the AngularJS services you're consuming: ``` describe('phoneDetail', () => { // Load the module that contains the `phoneDetail` component before each test beforeEach(angular.mock.module('phoneDetail')); // Test the controller describe('PhoneDetailController', () => { let $httpBackend: angular.IHttpBackendService; let ctrl: any; const xyzPhoneData = { name: 'phone xyz', images: ['image/url1.png', 'image/url2.png'] }; beforeEach(inject(($componentController: any, _$httpBackend_: angular.IHttpBackendService, $routeParams: angular.route.IRouteParamsService) => { $httpBackend = _$httpBackend_; $httpBackend.expectGET('phones/xyz.json').respond(xyzPhoneData); $routeParams.phoneId = 'xyz'; ctrl = $componentController('phoneDetail'); })); it('should fetch the phone details', () => { jasmine.addCustomEqualityTester(angular.equals); expect(ctrl.phone).toEqual({}); $httpBackend.flush(); expect(ctrl.phone).toEqual(xyzPhoneData); }); }); }); ``` Once you start the upgrade process and bring in SystemJS, configuration changes are needed for Karma. You need to let SystemJS load all the new Angular code, which can be done with the following kind of shim file: ``` // /*global jasmine, __karma__, window*/ Error.stackTraceLimit = 0; // "No stacktrace"" is usually best for app testing. // Uncomment to get full stacktrace output. Sometimes helpful, usually not. // Error.stackTraceLimit = Infinity; // jasmine.DEFAULT_TIMEOUT_INTERVAL = 1000; var builtPath = '/base/app/'; __karma__.loaded = function () { }; function isJsFile(path) { return path.slice(-3) == '.js'; } function isSpecFile(path) { return /\.spec\.(.*\.)?js$/.test(path); } function isBuiltFile(path) { return isJsFile(path) && (path.slice(0, builtPath.length) == builtPath); } var allSpecFiles = Object.keys(window.__karma__.files) .filter(isSpecFile) .filter(isBuiltFile); System.config({ baseURL: '/base', // Extend usual application package list with test folder packages: { 'testing': { main: 'index.js', defaultExtension: 'js' } }, // Assume npm: is set in `paths` in systemjs.config // Map the angular testing bundles map: { '@angular/core/testing': 'npm:@angular/core/fesm2015/testing.mjs', '@angular/common/testing': 'npm:@angular/common/fesm2015/testing.mjs', '@angular/common/http/testing': 'npm:@angular/common/fesm2015/http/testing.mjs', '@angular/compiler/testing': 'npm:@angular/compiler/fesm2015/testing.mjs', '@angular/platform-browser/testing': 'npm:@angular/platform-browser/fesm2015/testing.mjs', '@angular/platform-browser-dynamic/testing': 'npm:@angular/platform-browser-dynamic/fesm2015/testing.mjs', '@angular/router/testing': 'npm:@angular/router/fesm2015/testing.mjs', '@angular/forms/testing': 'npm:@angular/forms/fesm2015/testing.mjs', }, }); System.import('systemjs.config.js') .then(importSystemJsExtras) .then(initTestBed) .then(initTesting); /** Optional SystemJS configuration extras. Keep going w/o it */ function importSystemJsExtras(){ return System.import('systemjs.config.extras.js') .catch(function(reason) { console.log( 'Warning: System.import could not load the optional "systemjs.config.extras.js". Did you omit it by accident? Continuing without it.' 
); console.log(reason); }); } function initTestBed() { return Promise.all([ System.import('@angular/core/testing'), System.import('@angular/platform-browser-dynamic/testing') ]) .then(function (providers) { var coreTesting = providers[0]; var browserTesting = providers[1]; coreTesting.TestBed.initTestEnvironment( browserTesting.BrowserDynamicTestingModule, browserTesting.platformBrowserDynamicTesting()); }) } // Import all spec files and start karma function initTesting() { return Promise.all( allSpecFiles.map(function (moduleName) { return System.import(moduleName); }) ) .then(__karma__.start, __karma__.error); } ``` The shim first loads the SystemJS configuration, then the test support libraries of Angular, and then the spec files of the application themselves. Karma configuration should then be changed so that it uses the application root dir as the base directory, instead of `app`. ``` basePath: './', ``` Once done, you can load SystemJS and other dependencies, and also switch the configuration for loading application files so that they are *not* included in the page by Karma. You'll let the shim and SystemJS load them. ``` // System.js for module loading 'node_modules/systemjs/dist/system.src.js', // Polyfills 'node_modules/core-js/client/shim.js', // zone.js 'node_modules/zone.js/bundles/zone.umd.js', 'node_modules/zone.js/bundles/zone-testing.umd.js', // RxJs. { pattern: 'node_modules/rxjs/**/*.js', included: false, watched: false }, { pattern: 'node_modules/rxjs/**/*.js.map', included: false, watched: false }, // Angular itself and the testing library { pattern: 'node_modules/@angular/**/*.mjs', included: false, watched: false }, { pattern: 'node_modules/@angular/**/*.mjs.map', included: false, watched: false }, { pattern: 'node_modules/tslib/tslib.js', included: false, watched: false }, { pattern: 'node_modules/systemjs-plugin-babel/**/*.js', included: false, watched: false }, {pattern: 'systemjs.config.js', included: false, watched: false}, 'karma-test-shim.js', {pattern: 'app/**/*.module.js', included: false, watched: true}, {pattern: 'app/*!(.module|.spec).js', included: false, watched: true}, {pattern: 'app/!(bower_components)/**/*!(.module|.spec).js', included: false, watched: true}, {pattern: 'app/**/*.spec.js', included: false, watched: true}, {pattern: '**/*.html', included: false, watched: true}, ``` Since the HTML templates of Angular components will be loaded as well, you must help Karma out a bit so that it can route them to the right paths: ``` // proxied base paths for loading assets proxies: { // required for component assets fetched by Angular's compiler '/phone-detail': '/base/app/phone-detail', '/phone-list': '/base/app/phone-list' }, ``` The unit test files themselves also need to be switched to Angular when their production counterparts are switched. The specs for the checkmark pipe are probably the most straightforward, as the pipe has no dependencies: ``` import { CheckmarkPipe } from './checkmark.pipe'; describe('CheckmarkPipe', () => { it('should convert boolean values to unicode checkmark or cross', () => { const checkmarkPipe = new CheckmarkPipe(); expect(checkmarkPipe.transform(true)).toBe('\u2713'); expect(checkmarkPipe.transform(false)).toBe('\u2718'); }); }); ``` The unit test for the phone service is a bit more involved. You need to switch from the mocked-out AngularJS `$httpBackend` to a mocked-out Angular Http backend.
``` import { inject, TestBed } from '@angular/core/testing'; import { HttpClientTestingModule, HttpTestingController } from '@angular/common/http/testing'; import { Phone, PhoneData } from './phone.service'; describe('Phone', () => { let phone: Phone; const phonesData: PhoneData[] = [ {name: 'Phone X', snippet: '', images: []}, {name: 'Phone Y', snippet: '', images: []}, {name: 'Phone Z', snippet: '', images: []} ]; let httpMock: HttpTestingController; beforeEach(() => { TestBed.configureTestingModule({ imports: [ HttpClientTestingModule ], providers: [ Phone, ] }); }); beforeEach(inject([HttpTestingController, Phone], (_httpMock_: HttpTestingController, _phone_: Phone) => { httpMock = _httpMock_; phone = _phone_; })); afterEach(() => { httpMock.verify(); }); it('should fetch the phones data from `/phones/phones.json`', () => { phone.query().subscribe(result => { expect(result).toEqual(phonesData); }); const req = httpMock.expectOne(`/phones/phones.json`); req.flush(phonesData); }); }); ``` For the component specs, you can mock out the `Phone` service itself, and have it provide canned phone data. You use the component unit testing APIs of Angular for both components. ``` import { TestBed, waitForAsync } from '@angular/core/testing'; import { ActivatedRoute } from '@angular/router'; import { Observable, of } from 'rxjs'; import { PhoneDetailComponent } from './phone-detail.component'; import { Phone, PhoneData } from '../core/phone/phone.service'; import { CheckmarkPipe } from '../core/checkmark/checkmark.pipe'; function xyzPhoneData(): PhoneData { return {name: 'phone xyz', snippet: '', images: ['image/url1.png', 'image/url2.png']}; } class MockPhone { get(id: string): Observable<PhoneData> { return of(xyzPhoneData()); } } class ActivatedRouteMock { constructor(public snapshot: any) {} } describe('PhoneDetailComponent', () => { beforeEach(waitForAsync(() => { TestBed.configureTestingModule({ declarations: [ CheckmarkPipe, PhoneDetailComponent ], providers: [ { provide: Phone, useClass: MockPhone }, { provide: ActivatedRoute, useValue: new ActivatedRouteMock({ params: { phoneId: 1 } }) } ] }) .compileComponents(); })); it('should fetch phone detail', () => { const fixture = TestBed.createComponent(PhoneDetailComponent); fixture.detectChanges(); const compiled = fixture.debugElement.nativeElement; expect(compiled.querySelector('h1').textContent).toContain(xyzPhoneData().name); }); }); ``` ``` import {SpyLocation} from '@angular/common/testing'; import {NO_ERRORS_SCHEMA} from '@angular/core'; import {ComponentFixture, TestBed, waitForAsync} from '@angular/core/testing'; import {ActivatedRoute} from '@angular/router'; import {Observable, of} from 'rxjs'; import {Phone, PhoneData} from '../core/phone/phone.service'; import {PhoneListComponent} from './phone-list.component'; class ActivatedRouteMock { constructor(public snapshot: any) {} } class MockPhone { query(): Observable<PhoneData[]> { return of([ {name: 'Nexus S', snippet: '', images: []}, {name: 'Motorola DROID', snippet: '', images: []} ]); } } let fixture: ComponentFixture<PhoneListComponent>; describe('PhoneList', () => { beforeEach(waitForAsync(() => { TestBed .configureTestingModule({ declarations: [PhoneListComponent], providers: [ {provide: ActivatedRoute, useValue: new ActivatedRouteMock({params: {phoneId: 1}})}, {provide: Location, useClass: SpyLocation}, {provide: Phone, useClass: MockPhone}, ], schemas: [NO_ERRORS_SCHEMA] }) .compileComponents(); })); beforeEach(() => { fixture = TestBed.createComponent(PhoneListComponent); 
}); it('should create "phones" model with 2 phones fetched from xhr', () => { fixture.detectChanges(); const compiled = fixture.debugElement.nativeElement; expect(compiled.querySelectorAll('.phone-list-item').length).toBe(2); expect(compiled.querySelector('.phone-list-item:nth-child(1)').textContent) .toContain('Motorola DROID'); expect(compiled.querySelector('.phone-list-item:nth-child(2)').textContent) .toContain('Nexus S'); }); xit('should set the default value of orderProp model', () => { fixture.detectChanges(); const compiled = fixture.debugElement.nativeElement; expect(compiled.querySelector('select option:last-child').selected).toBe(true); }); }); ``` Finally, revisit both of the component tests when you switch to the Angular router. For the details component, provide a mock of the Angular `[ActivatedRoute](../api/router/activatedroute)` object instead of using the AngularJS `$routeParams`. ``` import { TestBed, waitForAsync } from '@angular/core/testing'; import { ActivatedRoute } from '@angular/router'; /* . . . */ class ActivatedRouteMock { constructor(public snapshot: any) {} } /* . . . */ beforeEach(waitForAsync(() => { TestBed.configureTestingModule({ declarations: [ CheckmarkPipe, PhoneDetailComponent ], providers: [ { provide: Phone, useClass: MockPhone }, { provide: ActivatedRoute, useValue: new ActivatedRouteMock({ params: { phoneId: 1 } }) } ] }) .compileComponents(); })); ``` And for the phone list component, a few adjustments to the router make the `[routerLink](../api/router/routerlink)` directives work. ``` import {SpyLocation} from '@angular/common/testing'; import {NO_ERRORS_SCHEMA} from '@angular/core'; import {ComponentFixture, TestBed, waitForAsync} from '@angular/core/testing'; import {ActivatedRoute} from '@angular/router'; import {Observable, of} from 'rxjs'; import {Phone, PhoneData} from '../core/phone/phone.service'; import {PhoneListComponent} from './phone-list.component'; /* . . . */ beforeEach(waitForAsync(() => { TestBed .configureTestingModule({ declarations: [PhoneListComponent], providers: [ {provide: ActivatedRoute, useValue: new ActivatedRouteMock({params: {phoneId: 1}})}, {provide: Location, useClass: SpyLocation}, {provide: Phone, useClass: MockPhone}, ], schemas: [NO_ERRORS_SCHEMA] }) .compileComponents(); })); beforeEach(() => { fixture = TestBed.createComponent(PhoneListComponent); }); ``` Last reviewed on Mon Feb 28 2022
angular Angular Internationalization Angular Internationalization ============================ *Internationalization*, sometimes referenced as i18n, is the process of designing and preparing your project for use in different locales around the world. *Localization* is the process of building versions of your project for different locales. The localization process includes the following actions. * Extract text for translation into different languages * Format data for a specific locale A *locale* identifies a region in which people speak a particular language or language variant. Possible regions include countries and geographical regions. A locale determines the formatting and parsing of the following details. * Measurement units including date and time, numbers, and currencies * Translated names including time zones, languages, and countries Learn about Angular internationalization ---------------------------------------- * [Common tasks](i18n-common-overview "Common internationalization tasks") - Learn how to implement many of the common tasks associated with Angular internationalization. * [Optional practices](i18n-optional-overview "Optional internationalization tasks") - Learn how to implement optional practices associated with Angular internationalization. * [Internationalization example](i18n-example "Internationalization example") - Review an example of Angular internationalization. Last reviewed on Mon Jun 06 2022 angular Glossary Glossary ======== Angular has its own vocabulary. Most Angular terms are common English words or computing terms that have a specific meaning within the Angular system. This glossary lists the most prominent terms and a few less familiar ones with unusual or unexpected definitions. [A](glossary#ahead-of-time-aot-compilation "A - Glossary | Angular") [B](glossary#binding "B - Glossary | Angular") [C](glossary#case-types "C - Glossary | Angular") [D](glossary#data-binding "D - Glossary | Angular") [E](glossary#eager-loading "E - Glossary | Angular") [F](glossary#form-control "F - Glossary | Angular") [G](glossary#immutability "G - Glossary | Angular") [H](glossary#immutability "H - Glossary | Angular") [I](glossary#immutability "I - Glossary | Angular") [J](glossary#javascript "J - Glossary | Angular") [K](glossary#lazy-loading "K - Glossary | Angular") [L](glossary#lazy-loading "L - Glossary | Angular") [M](glossary#module "M - Glossary | Angular") [N](glossary#ngcc "N - Glossary | Angular") [O](glossary#observable "O - Glossary | Angular") [P](glossary#pipe "P - Glossary | Angular") [Q](glossary#reactive-forms "Q - Glossary | Angular") [R](glossary#reactive-forms "R - Glossary | Angular") [S](glossary#schematic "S - Glossary | Angular") [T](glossary#target "T - Glossary | Angular") [U](glossary#unidirectional-data-flow "U - Glossary | Angular") [V](glossary#view "V - Glossary | Angular") [W](glossary#web-component "W - Glossary | Angular") [X](glossary#zone "X - Glossary | Angular") [Y](glossary#zone "Y - Glossary | Angular") [Z](glossary#zone "Z - Glossary | Angular") ahead-of-time (AOT) compilation ------------------------------- The Angular ahead-of-time (AOT) compiler converts Angular HTML and TypeScript code into efficient JavaScript code during the build phase. The build phase occurs before the browser downloads and runs the rendered code.
This is the best compilation mode for production environments, with decreased load time and increased performance compared to [just-in-time (JIT) compilation](glossary#just-in-time-jit-compilation "just-in-time (JIT) compilation - Glossary | Angular"). By compiling your application using the `ngc` command-line tool, you can bootstrap directly to a module factory, so you do not need to include the Angular compiler in your JavaScript bundle. Angular element --------------- An Angular [component](glossary#component "component - Glossary | Angular") packaged as a [custom element](glossary#custom-element "custom element - Glossary | Angular"). Learn more in [Angular Elements Overview](elements "Angular elements overview | Angular"). Angular package format (APF) ---------------------------- An Angular specific specification for layout of npm packages that is used by all first-party Angular packages, and most third-party Angular libraries. Learn more in the [Angular Package Format specification](angular-package-format "Angular Package Format | Angular"). annotation ---------- A structure that provides metadata for a class. To learn more, see [decorator](glossary#decorator--decoration "decorator | decoration - Glossary | Angular"). app-shell --------- App shell is a way to render a portion of your application using a route at build time. This gives users a meaningful first paint of your application that appears quickly because the browser can render static HTML and CSS without the need to initialize JavaScript. To learn more, see [The App Shell Model](https://developers.google.com/web/fundamentals/architecture/app-shell "The App Shell Model | Web Fundamentals | Google Developers"). You can use the Angular CLI to [generate](cli/generate#app-shell "app-shell - ng generate | CLI | Angular") an app shell. This can improve the user experience by quickly launching a static rendered page while the browser downloads the full client version and switches to it automatically after the code loads. A static rendered page is a skeleton common to all pages. To learn more, see [Service Worker and PWA](service-worker-intro "Angular service worker introduction | Angular"). Architect --------- The tool that the Angular CLI uses to perform complex tasks such as compilation and test running, according to a provided configuration. Architect is a shell that runs a [builder](glossary#builder "builder - Glossary | Angular") with a given [target configuration](glossary#target "target - Glossary | Angular"). The [builder](glossary#builder "builder - Glossary | Angular") is defined in an [npm package](glossary#npm-package "npm package - Glossary | Angular"). In the [workspace configuration file](workspace-config#project-tool-configuration-options "Project tool configuration options - Angular workspace configuration | Angular"), an "architect" section provides configuration options for Architect builders. For example, a built-in builder for linting is defined in the package `@angular-devkit/build_angular:tslint`, which uses the [TSLint](https://palantir.github.io/tslint "TSLint | Palantir | GitHub") tool to perform linting, with a configuration specified in a `tslint.json` file. Use the [`ng run`](cli/run "ng run | CLI | Angular") Angular CLI command to invoke a builder by specifying a [target configuration](glossary#target "target - Glossary | Angular") associated with that builder. Integrators can add builders to enable tools and workflows to run through the Angular CLI. 
For example, a custom builder can replace the third-party tools used by the built-in implementations for Angular CLI commands, such as `ng build` or `ng test`. attribute directive ------------------- A category of [directive](glossary#directive "directive - Glossary | Angular") that can listen to and modify the behavior of other HTML elements, attributes, properties, and components. They are usually represented as HTML attributes, hence the name. Learn more in [Attribute Directives](attribute-directives "Attribute directives | Angular"). binding ------- Generally, the practice of setting a variable or property to a data value. Within Angular, typically refers to [data binding](glossary#data-binding "data binding - Glossary | Angular"), which coordinates DOM object properties with data object properties. Sometimes refers to a [dependency-injection](glossary#dependency-injection-di "dependency injection (DI) - Glossary | Angular") binding between a [token](glossary#token "token - Glossary | Angular") and a dependency [provider](glossary#provider "provider - Glossary | Angular"). bootstrap --------- A way to initialize and launch an application or system. In Angular, the `AppModule` root NgModule of an application has a `bootstrap` property that identifies the top-level [components](glossary#component "component - Glossary | Angular") of the application. During the bootstrap process, Angular creates and inserts these components into the `index.html` host web page. You can bootstrap multiple applications in the same `index.html`. Each application contains its own components. Learn more in [Bootstrapping](bootstrapping "Launching your app with a root module | Angular"). builder ------- A function that uses the [Architect](glossary#architect "Architect - Glossary | Angular") API to perform a complex process such as `build` or `test`. The builder code is defined in an [npm package](glossary#npm-package "npm package - Glossary | Angular"). For example, [BrowserBuilder](https://github.com/angular/angular-cli/tree/main/packages/angular_devkit/build_angular/src/builders/browser "packages/angular_devkit/build_angular/src/builders/browser | angular/angular-cli | GitHub") runs a [webpack](https://webpack.js.org "webpack | JS.ORG") build for a browser target and [KarmaBuilder](https://github.com/angular/angular-cli/tree/main/packages/angular_devkit/build_angular/src/builders/karma "packages/angular_devkit/build_angular/src/builders/karma | angular/angular-cli | GitHub") starts the Karma server and runs a webpack build for unit tests. The [`ng run`](cli/run "ng run | CLI | Angular") Angular CLI command invokes a builder with a specific [target configuration](glossary#target "target - Glossary | Angular"). The [workspace configuration](workspace-config "Angular workspace configuration | Angular") file, `angular.json`, contains default configurations for built-in builders. case types ---------- Angular uses capitalization conventions to distinguish the names of various types, as described in the [naming guidelines section](styleguide#02-01 "Style 02-01 - Angular coding style guide | Angular") of the Style Guide. Here is a summary of the case types: | | Details | example | | --- | --- | --- | | camelCase | Symbols, properties, methods, pipe names, non-component directive selectors, constants. Standard or lower camel case uses lowercase on the first letter of the item. 
| `selectedHero` | | UpperCamelCase PascalCase | Class names, including classes that define components, interfaces, NgModules, directives, and pipes. Upper camel case uses uppercase on the first letter of the item. | `HeroComponent` | | dash-case kebab-case | Descriptive part of file names, component selectors. | `app-hero-list` | | underscore\_case snake\_case | Not typically used in Angular. Snake case uses words connected with underscores. | `convert_link_mode` | | UPPER\_UNDERSCORE\_CASE UPPER\_SNAKE\_CASE SCREAMING\_SNAKE\_CASE | Traditional for constants. This case is acceptable, but camelCase is preferred. Upper snake case uses words in all capital letters connected with underscores. | `FIX_ME` | change detection ---------------- The mechanism by which the Angular framework synchronizes the state of the UI of an application with the state of the data. The change detector checks the current state of the data model whenever it runs, and maintains it as the previous state to compare on the next iteration. As the application logic updates component data, values that are bound to DOM properties in the view can change. The change detector is responsible for updating the view to reflect the current data model. Similarly, the user can interact with the UI, causing events that change the state of the data model. These events can trigger change detection. Using the default change-detection strategy, the change detector goes through the [view hierarchy](glossary#view-hierarchy "view hierarchy - Glossary | Angular") on each VM turn to check every [data-bound property](glossary#data-binding "data binding - Glossary | Angular") in the template. In the first phase, it compares the current state of the dependent data with the previous state, and collects changes. In the second phase, it updates the page DOM to reflect any new data values. If you set the `OnPush` change-detection strategy, the change detector runs only when [explicitly invoked](../api/core/changedetectorref "ChangeDetectorRef | @angular/core - API | Angular"), or when it is triggered by an `[Input](../api/core/input)` reference change or event handler. This typically improves performance. To learn more, see [Optimize the change detection in Angular](https://web.dev/faster-angular-change-detection "Optimize Angular's change detection | web.dev"). class decorator --------------- A [decorator](glossary#decorator--decoration "decorator | decoration - Glossary | Angular") that appears immediately before a class definition, which declares the class to be of the given type, and provides metadata suitable to the type. The following decorators can declare Angular class types. * `@[Component](../api/core/component)()` * `@[Directive](../api/core/directive)()` * `@[Pipe](../api/core/pipe)()` * `@[Injectable](../api/core/injectable)()` * `@[NgModule](../api/core/ngmodule)()` class field decorator --------------------- A [decorator](glossary#decorator--decoration "decorator | decoration - Glossary | Angular") statement immediately before a field in a class definition that declares the type of that field. Some examples are `@[Input](../api/core/input)` and `@[Output](../api/core/output)`. collection ---------- In Angular, a set of related [schematics](glossary#schematic "schematic - Glossary | Angular") collected in an [npm package](glossary#npm-package "npm package - Glossary | Angular"). 
command-line interface (CLI) ---------------------------- The [Angular CLI](cli "CLI Overview and Command Reference | Angular") is a command-line tool for managing the Angular development cycle. Use it to create the initial filesystem scaffolding for a [workspace](glossary#workspace "workspace - Glossary | Angular") or [project](glossary#project "project - Glossary | Angular"), and to run [schematics](glossary#schematic "schematic - Glossary | Angular") that add and modify code for initial generic versions of various elements. The Angular CLI supports all stages of the development cycle, including building, testing, bundling, and deployment. * To begin using the Angular CLI for a new project, see [Local Environment Setup](setup-local "Setting up the local environment and workspace | Angular"). * To learn more about the full capabilities of the Angular CLI, see the [Angular CLI command reference](cli "CLI Overview and Command Reference | Angular"). See also [Schematics CLI](glossary#schematics-cli "Schematics CLI - Glossary | Angular"). component --------- A class with the `@[Component](../api/core/component)()` [decorator](glossary#decorator--decoration "decorator | decoration - Glossary | Angular") that associates it with a companion [template](glossary#template "template - Glossary | Angular"). Together, the component class and template define a [view](glossary#view "view - Glossary | Angular"). A component is a special type of [directive](glossary#directive "directive - Glossary | Angular"). The `@[Component](../api/core/component)()` decorator extends the `@[Directive](../api/core/directive)()` decorator with template-oriented features. An Angular component class is responsible for exposing data and handling most of the display and user-interaction logic of the view through [data binding](glossary#data-binding "data binding - Glossary | Angular"). Read more about component classes, templates, and views in [Introduction to Angular concepts](architecture "Introduction to Angular concepts | Angular"). configuration ------------- See [workspace configuration](glossary#workspace-configuration "workspace configuration - Glossary | Angular") content projection ------------------ A way to insert DOM content from outside a component into the view of the component in a designated spot. To learn more, see [Responding to changes in content](lifecycle-hooks#responding-to-projected-content-changes "Responding to projected content changes - Lifecycle Hooks | Angular"). custom element -------------- A web platform feature, currently supported by most browsers and available in other browsers through polyfills. See [Browser support](browser-support "Browser support | Angular"). The custom element feature extends HTML by allowing you to define a tag whose content is created and controlled by JavaScript code. A custom element is recognized by a browser when it is added to the [CustomElementRegistry](https://developer.mozilla.org/docs/Web/API/CustomElementRegistry "CustomElementRegistry | MDN"). A custom element is also referenced as a *web component*. You can use the API to transform an Angular component so that it can be registered with the browser and used in any HTML that you add directly to the DOM within an Angular application. The custom element tag inserts the view of the component, with change-detection and data-binding functionality, into content that would otherwise be displayed without Angular processing. See [Angular element](glossary#angular-element "Angular element - Glossary | Angular"). 
See also [dynamic component loading](glossary#dynamic-component-loading "dynamic component loading - Glossary | Angular"). data binding ------------ A process that allows applications to display data values to a user and respond to user actions. User actions include clicks, touches, keystrokes, and so on. In data binding, you declare the relationship between an HTML widget and a data source and let the framework handle the details. Data binding is an alternative to manually pushing application data values into HTML, attaching event listeners, pulling changed values from the screen, and updating application data values. Read about the following forms of binding in Angular's [Template Syntax](template-syntax "Template syntax | Angular"): * [Interpolation](interpolation "Text interpolation | Angular") * [Property binding](property-binding "Property binding | Angular") * [Event binding](event-binding "Event binding | Angular") * [Attribute binding](attribute-binding "Attribute binding | Angular") * [Class and style binding](class-binding "Class and style binding | Angular") * [Two-way data binding with ngModel](built-in-directives#displaying-and-updating-properties-with-ngmodel "Displaying and updating properties with ngModel - Built-in directives | Angular") declarable ---------- A class that you can add to the `declarations` list of an [NgModule](glossary#ngmodule "NgModule - Glossary | Angular"). You can declare [components](glossary#component "component - Glossary | Angular"), [directives](glossary#directive "directive - Glossary | Angular"), and [pipes](glossary#pipe "pipe - Glossary | Angular"), unless they have the `standalone` flag in their decorators set to `true`, which makes them standalone. Note: standalone components/directives/pipes are **not** declarables. More information about standalone classes can be found [below](glossary#standalone "standalone - Glossary | Angular"). Do not declare the following: * A class already declared as [standalone](glossary#standalone "standalone - Glossary | Angular"). * A class that is already declared in another NgModule. * An array of directives imported from another package. For example, do not declare `FORMS_DIRECTIVES` from `@angular/forms`. * NgModule classes. * Service classes. * Non-Angular classes and objects, such as strings, numbers, functions, entity models, configurations, business logic, and helper classes. Note that declarables can also be declared as standalone and simply be imported inside other standalone components or existing NgModules. To learn more, see the [Standalone components guide](standalone-components "Getting started with standalone components | Angular"). decorator | decoration ---------------------- A function that modifies a class or property definition. Decorators are an experimental (stage 3) [JavaScript language feature](https://github.com/tc39/proposal-decorators "tc39/proposal-decorators | GitHub"). A decorator is also referenced as an *annotation*. TypeScript adds support for decorators. Angular defines decorators that attach metadata to classes or properties so that it knows what those classes or properties mean and how they should work. To learn more, see [class decorator](glossary#class-decorator "class decorator - Glossary | Angular"). See also [class field decorator](glossary#class-field-decorator "class field decorator - Glossary | Angular").
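To illustrate the declarable and decorator entries above, the following sketch shows an NgModule whose `declarations` array lists a component and a pipe, each marked with its own class decorator. The class names, selector, and pipe name are hypothetical.

```
import { Component, NgModule, Pipe, PipeTransform } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

// A component and a pipe are both declarables when they are not standalone.
@Component({
  selector: 'app-hello',
  template: `<p>{{ 'hello' | initCaps }}</p>`,
})
export class HelloComponent {}

@Pipe({ name: 'initCaps' })
export class InitCapsPipe implements PipeTransform {
  transform(value: string): string {
    return value.charAt(0).toUpperCase() + value.slice(1);
  }
}

// The NgModule declares both classes so they can be used in its templates.
@NgModule({
  declarations: [HelloComponent, InitCapsPipe],
  imports: [BrowserModule],
  bootstrap: [HelloComponent],
})
export class AppModule {}
```

A standalone component or pipe would instead set `standalone: true` in its decorator and be listed in the `imports` array of whatever uses it, rather than in `declarations`.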
dependency injection (DI) ------------------------- A design pattern and mechanism for creating and delivering some parts of an application (dependencies) to other parts of an application that require them. In Angular, dependencies are typically services, but they also can be values, such as strings or functions. An [injector](glossary#injector "injector - Glossary | Angular") for an application (created automatically during bootstrap) instantiates dependencies when needed, using a configured [provider](glossary#provider "provider - Glossary | Angular") of the service or value. Learn more in [Dependency Injection in Angular](dependency-injection "Dependency injection in Angular | Angular"). DI token -------- A lookup token associated with a dependency [provider](glossary#provider "provider - Glossary | Angular"), for use with the [dependency injection](glossary#dependency-injection-di "dependency injection (DI) - Glossary | Angular") system. directive --------- A class that can modify the structure of the DOM or modify attributes in the DOM and component data model. A directive class definition is immediately preceded by a `@[Directive](../api/core/directive)()` [decorator](glossary#decorator--decoration "decorator | decoration - Glossary | Angular") that supplies metadata. A directive class is usually associated with an HTML element or attribute, and that element or attribute is often referred to as the directive itself. When Angular finds a directive in an HTML [template](glossary#template "template - Glossary | Angular"), it creates the matching directive class instance and gives the instance control over that portion of the browser DOM. Angular has three categories of directive: * [Components](glossary#component "component - Glossary | Angular") use `@[Component](../api/core/component)()` to associate a template with a class. `@[Component](../api/core/component)()` is an extension of `@[Directive](../api/core/directive)()`. * [Attribute directives](glossary#attribute-directive "attribute directive - Glossary | Angular") modify behavior and appearance of page elements. * [Structural directives](glossary#structural-directive "structural directive - Glossary | Angular") modify the structure of the DOM. Angular supplies a number of built-in directives that begin with the `ng` prefix. You can also create new directives to implement your own functionality. You associate a *selector* with a custom directive; this extends the [template syntax](template-syntax "Template syntax | Angular") that you can use in your applications. A *selector* is an HTML tag, such as `<my-directive>`. **UpperCamelCase**, such as `[NgIf](../api/common/ngif)`, refers to a directive class. You can use **UpperCamelCase** when describing properties and directive behavior. **lowerCamelCase**, such as `[ngIf](../api/common/ngif)` refers to the attribute name of a directive. You can use **lowerCamelCase** when describing how to apply the directive to an element in the HTML template. domain-specific language (DSL) ------------------------------ A special-purpose library or API. To learn more, see [Domain-specific language](https://en.wikipedia.org/wiki/Domain-specific_language "Domain-specific language | Wikipedia"). 
Angular extends TypeScript with domain-specific languages for a number of domains relevant to Angular applications, defined in NgModules such as [animations](animations "Introduction to Angular animations | Angular"), [forms](forms "Building a template-driven form | Angular"), and [routing and navigation](router "Common Routing Tasks | Angular"). dynamic component loading ------------------------- A technique for adding a component to the DOM at run time. Requires that you exclude the component from compilation and then connect it to the change-detection and event-handling framework of Angular when you add it to the DOM. See also [custom element](glossary#custom-element "custom element - Glossary | Angular"), which provides an easier path with the same result. eager loading ------------- NgModules or components that are loaded on launch are referenced as eager-loaded, to distinguish them from those that are loaded at run time, which are referenced as lazy-loaded. See also [lazy loading](glossary#lazy-loading "lazy loading - Glossary | Angular"). ECMAScript ---------- The [official JavaScript language specification](https://en.wikipedia.org/wiki/ECMAScript "ECMAScript | Wikipedia"). Not all browsers support the latest ECMAScript standard, but you can use a [transpiler](glossary#transpile "transpile - Glossary | Angular") to write code using the latest features, which will then be transpiled to code that runs on versions that are supported by browsers. An example of a [transpiler](glossary#transpile "transpile - Glossary | Angular") is [TypeScript](glossary#typescript "TypeScript - Glossary | Angular"). To learn more, see [Browser Support](browser-support "Browser support | Angular"). element ------- Angular defines an `[ElementRef](../api/core/elementref)` class to wrap render-specific native UI elements. In most cases, this allows you to use Angular templates and data binding to access DOM elements without reference to the native element. The documentation generally refers to *elements* as distinct from *DOM elements*. *Elements* are instances of the `[ElementRef](../api/core/elementref)` class. *DOM elements* can be accessed directly, if necessary. To learn more, see also [custom element](glossary#custom-element "custom element - Glossary | Angular"). entry point ----------- A [JavaScript module](glossary#module "module - Glossary | Angular") that is intended to be imported by a user of an [npm package](npm-packages "Workspace npm dependencies | Angular"). An entry-point module typically re-exports symbols from other internal modules. A package can contain multiple entry points. For example, the `@angular/core` package has two entry-point modules, which can be imported using the module names `@angular/core` and `@angular/core/testing`. form control ------------ An instance of `[FormControl](../api/forms/formcontrol)`, which is a fundamental building block for Angular forms. Together with `[FormGroup](../api/forms/formgroup)` and `[FormArray](../api/forms/formarray)`, it tracks the value, validation, and status of a form input element. Read more about forms in the [Introduction to forms in Angular](forms-overview "Introduction to forms in Angular | Angular"). form model ---------- The "source of truth" for the value and validation status of a form input element at a given point in time. When using [reactive forms](glossary#reactive-forms "reactive forms - Glossary | Angular"), the form model is created explicitly in the component class.
When using [template-driven forms](glossary#template-driven-forms "template-driven forms - Glossary | Angular"), the form model is implicitly created by directives. Learn more about reactive and template-driven forms in the [Introduction to forms in Angular](forms-overview "Introduction to forms in Angular | Angular"). form validation --------------- A check that runs when form values change and reports whether the given values are correct and complete, according to the defined constraints. Reactive forms apply [validator functions](form-validation#adding-custom-validators-to-reactive-forms "Adding custom validators to reactive forms - Validating form input | Angular"). Template-driven forms use [validator directives](form-validation#adding-custom-validators-to-template-driven-forms "Adding custom validators to template-driven forms - Validating form input | Angular"). To learn more, see [Form Validation](form-validation "Validating form input | Angular"). immutability ------------ The inability to alter the state of a value after its creation. [Reactive forms](glossary#reactive-forms "reactive forms - Glossary | Angular") perform immutable changes in that each change to the data model produces a new data model rather than modifying the existing one. [Template-driven forms](glossary#template-driven-forms "template-driven forms - Glossary | Angular") perform mutable changes with `[NgModel](../api/forms/ngmodel)` and [two-way data binding](glossary#data-binding "data binding - Glossary | Angular") to modify the existing data model in place. injectable ---------- An Angular class or other definition that provides a dependency using the [dependency injection](glossary#dependency-injection-di "dependency injection (DI) - Glossary | Angular") mechanism. An injectable [service](glossary#service "service - Glossary | Angular") class must be marked by the `@[Injectable](../api/core/injectable)()` [decorator](glossary#decorator--decoration "decorator | decoration - Glossary | Angular"). Other items, such as constant values, can also be injectable. injector -------- An object in the Angular [dependency-injection](glossary#dependency-injection-di "dependency injection (DI) - Glossary | Angular") system that can find a named dependency in its cache or create a dependency using a configured [provider](glossary#provider "provider - Glossary | Angular"). Injectors are created for NgModules automatically as part of the bootstrap process and are inherited through the component hierarchy. * An injector provides a singleton instance of a dependency, and can inject this same instance in multiple components. * A hierarchy of injectors at the NgModule and component level can provide different instances of a dependency to their own components and child components. * You can configure injectors with different providers that can provide different implementations of the same dependency. Learn more about the injector hierarchy in [Hierarchical Dependency Injectors](hierarchical-dependency-injection "Hierarchical injectors | Angular"). input ----- When defining a [directive](glossary#directive "directive - Glossary | Angular"), the `@[Input](../api/core/input)()` decorator on a directive property makes that property available as a *target* of a [property binding](property-binding "Property binding | Angular"). Data values flow into an input property from the data source identified in the [template expression](glossary#template-expression "template expression - Glossary | Angular") to the right of the equal sign. 
To learn more, see [`@Input()` and `@Output()` decorator functions](inputs-outputs "Sharing data between child and parent directives and components | Angular"). interpolation ------------- A form of property [data binding](glossary#data-binding "data binding - Glossary | Angular") in which a [template expression](glossary#template-expression "template expression - Glossary | Angular") between double-curly braces renders as text. That text can be concatenated with neighboring text before it is assigned to an element property or displayed between element tags, as in this example. ``` <label>My current hero is {{hero.name}}</label> ``` Read more in the [Interpolation](interpolation "Text interpolation | Angular") guide. Ivy --- Ivy is the historical code name for the current [compilation and rendering pipeline](https://blog.angular.io/a-plan-for-version-8-0-and-ivy-b3318dfc19f7 "A plan for version 8.0 and Ivy | Angular Blog") in Angular. It is now the only supported engine, so everything uses Ivy. JavaScript ---------- To learn more, see [ECMAScript](glossary#ecmascript "ECMAScript - Glossary | Angular"). To learn more, see also [TypeScript](glossary#typescript "TypeScript - Glossary | Angular"). just-in-time (JIT) compilation ------------------------------ The Angular just-in-time (JIT) compiler converts your Angular HTML and TypeScript code into efficient JavaScript code at run time, as part of bootstrapping. JIT compilation is the default (as opposed to AOT compilation) when you run the `ng build` and `ng serve` Angular CLI commands, and is a good choice during development. JIT mode is strongly discouraged for production use because it results in large application payloads that hinder the bootstrap performance. Compare to [ahead-of-time (AOT) compilation](glossary#ahead-of-time-aot-compilation "ahead-of-time (AOT) compilation - Glossary | Angular"). lazy loading ------------ A process that speeds up application load time by splitting the application into multiple bundles and loading them on demand. For example, dependencies can be lazy loaded as needed. The example differs from [eager-loaded](glossary#eager-loading "eager loading - Glossary | Angular") modules that are required by the root module and are loaded on launch. The [router](glossary#router "router - Glossary | Angular") makes use of lazy loading to load child views only when the parent view is activated. Similarly, you can build custom elements that can be loaded into an Angular application when needed. library ------- In Angular, a [project](glossary#project "project - Glossary | Angular") that provides functionality that can be included in other Angular applications. A library is not a complete Angular application and cannot run independently. To add re-usable Angular functionality to non-Angular web applications, use Angular [custom elements](glossary#angular-element "Angular element - Glossary | Angular"). * Library developers can use the [Angular CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") to `generate` scaffolding for a new library in an existing [workspace](glossary#workspace "workspace - Glossary | Angular"), and can publish a library as an `npm` package. * Application developers can use the [Angular CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") to `add` a published library for use with an application in the same [workspace](glossary#workspace "workspace - Glossary | Angular"). 
See also [schematic](glossary#schematic "schematic - Glossary | Angular"). lifecycle hook -------------- An interface that allows you to tap into the lifecycle of [directives](glossary#directive "directive - Glossary | Angular") and [components](glossary#component "component - Glossary | Angular") as they are created, updated, and destroyed. Each interface has a single hook method whose name is the interface name prefixed with `ng`. For example, the `[OnInit](../api/core/oninit)` interface has a hook method named `ngOnInit`. Angular runs these hook methods in the following order: | | hook method | Details | | --- | --- | --- | | 1 | `ngOnChanges` | When an [input](glossary#input "input - Glossary | Angular") or [output](glossary#output "output - Glossary | Angular") binding value changes. | | 2 | `ngOnInit` | After the first `ngOnChanges`. | | 3 | `ngDoCheck` | Developer's custom change detection. | | 4 | `ngAfterContentInit` | After component content initialized. | | 5 | `ngAfterContentChecked` | After every check of component content. | | 6 | `ngAfterViewInit` | After the views of a component are initialized. | | 7 | `ngAfterViewChecked` | After every check of the views of a component. | | 8 | `ngOnDestroy` | Just before the directive is destroyed. | To learn more, see [Lifecycle Hooks](lifecycle-hooks "Lifecycle Hooks | Angular"). module ------ In general, a module collects a block of code dedicated to a single purpose. Angular uses standard JavaScript modules and also defines an Angular module, `[NgModule](../api/core/ngmodule)`. In JavaScript, or ECMAScript, each file is a module and all objects defined in the file belong to that module. Objects can be exported, making them public, and public objects can be imported for use by other modules. Angular ships as a collection of JavaScript modules. A collection of JavaScript modules are also referenced as a library. Each Angular library name begins with the `@angular` prefix. Install Angular libraries with the [npm package manager](https://docs.npmjs.com/about-npm "About npm | npm") and import parts of them with JavaScript `import` declarations. Compare to [NgModule](glossary#ngmodule "NgModule - Glossary | Angular"). ngcc ---- Angular compatibility compiler. If you build your application using [Ivy](glossary#ivy "Ivy - Glossary | Angular"), but it depends on libraries that have not been compiled with Ivy, the Angular CLI uses `ngcc` to automatically update the dependent libraries to use Ivy. NgModule -------- A class definition preceded by the `@[NgModule](../api/core/ngmodule)()` [decorator](glossary#decorator--decoration "decorator | decoration - Glossary | Angular"), which declares and serves as a manifest for a block of code dedicated to an application domain, a workflow, or a closely related set of capabilities. Like a [JavaScript module](glossary#module "module - Glossary | Angular"), an NgModule can export functionality for use by other NgModules and import public functionality from other NgModules. The metadata for an NgModule class collects components, directives, and pipes that the application uses along with the list of imports and exports. See also [declarable](glossary#declarable "declarable - Glossary | Angular"). NgModules are typically named after the file in which the exported thing is defined. For example, the Angular [DatePipe](../api/common/datepipe "DatePipe | @angular/common - API | Angular") class belongs to a feature module named `date_pipe` in the file `date_pipe.ts`. 
You import them from an Angular [scoped package](glossary#scoped-package "scoped package - Glossary | Angular") such as `@angular/core`. Every Angular application has a root module. By convention, the class is named `AppModule` and resides in a file named `app.module.ts`. To learn more, see [NgModules](ngmodules "NgModules | Angular"). npm package ----------- The [npm package manager](https://docs.npmjs.com/about-npm "About npm | npm") is used to distribute and load Angular modules and libraries. Learn more about how Angular uses [Npm Packages](npm-packages "Workspace npm dependencies | Angular"). ngc --- `ngc` is a TypeScript-to-JavaScript transpiler that processes Angular decorators, metadata, and templates, and emits JavaScript code. The most recent implementation is internally referred to as `ngtsc` because it is a minimalistic wrapper around the TypeScript compiler `tsc` that adds a transform for processing Angular code. observable ---------- A producer of multiple values, which it pushes to [subscribers](glossary#subscriber "subscriber - Glossary | Angular"). Used for asynchronous event handling throughout Angular. You execute an observable by subscribing to it with its `subscribe()` method, passing callbacks for notifications of new values, errors, or completion. Observables can deliver a single value or multiple values of any type to subscribers, either synchronously (as a function delivers a value to the requester) or on a schedule. A subscriber receives notification of new values as they are produced and notification of either normal completion or error completion. Angular uses a third-party library named [Reactive Extensions (RxJS)](https://rxjs.dev "RxJS"). To learn more, see [Observables](observables "Using observables to pass values | Angular"). observer -------- An object passed to the `subscribe()` method for an [observable](glossary#observable "observable - Glossary | Angular"). The object defines the callbacks for the [subscriber](glossary#subscriber "subscriber - Glossary | Angular"). output ------ When defining a [directive](glossary#directive "directive - Glossary | Angular"), the `@[Output](../api/core/output)()` decorator on a directive property makes that property available as a *target* of [event binding](event-binding "Event binding | Angular"). Events stream *out* of this property to the receiver identified in the [template expression](glossary#template-expression "template expression - Glossary | Angular") to the right of the equal sign. To learn more, see [`@Input()` and `@Output()` decorator functions](inputs-outputs "Sharing data between child and parent directives and components | Angular"). pipe ---- A class which is preceded by the `@[Pipe](../api/core/pipe)()` decorator and which defines a function that transforms input values to output values for display in a [view](glossary#view "view - Glossary | Angular"). Angular defines various pipes, and you can define new pipes. To learn more, see [Pipes](pipes "Transforming Data Using Pipes | Angular"). platform -------- In Angular terminology, a platform is the context in which an Angular application runs. The most common platform for Angular applications is a web browser, but it can also be an operating system for a mobile device, or a web server. Support for the various Angular run-time platforms is provided by the `@angular/platform-*` packages.
These packages allow applications that make use of `@angular/core` and `@angular/common` to execute in different environments by providing implementation for gathering user input and rendering UIs for the given platform. Isolating platform-specific functionality allows the developer to make platform-independent use of the rest of the framework. * When running in a web browser, [`BrowserModule`](../api/platform-browser/browsermodule "BrowserModule | @angular/platform-browser - API | Angular") is imported from the `platform-browser` package, and supports services that simplify security and event processing, and allows applications to access browser-specific features, such as interpreting keyboard input and controlling the title of the document being displayed. All applications running in the browser use the same platform service. * When [server-side rendering (SSR)](glossary#server-side-rendering "server-side rendering - Glossary | Angular") is used, the [`platform-server`](../api/platform-server "@angular/platform-server | API | Angular") package provides web server implementations of the `DOM`, `XMLHttpRequest`, and other low-level features that do not rely on a browser. polyfill -------- An [npm package](npm-packages "Workspace npm dependencies | Angular") that plugs gaps in the JavaScript implementation of a browser. See [Browser Support](browser-support "Browser support | Angular") for polyfills that support particular functionality for particular platforms. project ------- In the Angular CLI, a standalone application or [library](glossary#library "library - Glossary | Angular") that can be created or modified by an Angular CLI command. A project, as generated by the [`ng new`](cli/new "ng new | CLI | Angular"), contains the set of source files, resources, and configuration files that you need to develop and test the application using the Angular CLI. Projects can also be created with the `ng generate application` and `ng generate library` commands. To learn more, see [Project File Structure](file-structure "Workspace and project file structure | Angular"). The [`angular.json`](workspace-config "Angular workspace configuration | Angular") file configures all projects in a [workspace](glossary#workspace "workspace - Glossary | Angular"). provider -------- An object that implements one of the [`Provider`](../api/core/provider "Provider | @angular/core - API | Angular") interfaces. A provider object defines how to obtain an injectable dependency associated with a [DI token](glossary#di-token "DI token - Glossary | Angular"). An [injector](glossary#injector "injector - Glossary | Angular") uses the provider to create a new instance of a dependency for a class that requires it. Angular registers its own providers with every injector, for services that Angular defines. You can register your own providers for services that your application needs. See also [service](glossary#service "service - Glossary | Angular"). See also [dependency injection](glossary#dependency-injection-di "dependency injection (DI) - Glossary | Angular"). Learn more in [Dependency Injection](dependency-injection "Dependency injection in Angular | Angular"). reactive forms -------------- A framework for building Angular forms through code in a component. The alternative is a [template-driven form](glossary#template-driven-forms "template-driven forms - Glossary | Angular"). When using reactive forms: * The "source of truth", the form model, is defined in the component class. 
* Validation is set up through validation functions rather than validation directives. * Each control is explicitly created in the component class by creating a `[FormControl](../api/forms/formcontrol)` instance manually or with `[FormBuilder](../api/forms/formbuilder)`. * The template input elements do *not* use `[ngModel](../api/forms/ngmodel)`. * The associated Angular directives are prefixed with `form`, such as `formControl`, `formGroup`, and `[formControlName](../api/forms/formcontrolname)`. The alternative is a template-driven form. For an introduction and comparison of both forms approaches, see [Introduction to Angular Forms](forms-overview "Introduction to forms in Angular | Angular"). resolver -------- A class that implements the [Resolve](../api/router/resolve "Resolve | @angular/router - API | Angular") interface that you use to produce or retrieve data that is needed before navigation to a requested route can be completed. You may use a function with the same signature as the [resolve()](../api/router/resolve "Resolve | @angular/router - API | Angular") method in place of the [Resolve](../api/router/resolve "Resolve | @angular/router - API | Angular") interface. Resolvers run after all [route guards](glossary#route-guard "route guard - Glossary | Angular") for a route tree have been executed and have succeeded. See an example of using a [resolve guard](router-tutorial-toh#resolve-pre-fetching-component-data "Resolve: pre-fetching component data - Router tutorial: tour of heroes | Angular") to retrieve dynamic data. route guard ----------- A method that controls navigation to a requested route in a routing application. Guards determine whether a route can be activated or deactivated, and whether a lazy-loaded module can be loaded. Learn more in the [Routing and Navigation](router#preventing-unauthorized-access "Preventing unauthorized access - Common Routing Tasks | Angular") guide. router ------ A tool that configures and implements navigation among states and [views](glossary#view "view - Glossary | Angular") within an Angular application. The `[Router](../api/router/router)` module is an [NgModule](glossary#ngmodule "NgModule - Glossary | Angular") that provides the necessary service providers and directives for navigating through application views. A [routing component](glossary#routing-component "routing component - Glossary | Angular") is one that imports the `[Router](../api/router/router)` module and whose template contains a `[RouterOutlet](../api/router/routeroutlet)` element where it can display views produced by the router. The router defines navigation among views on a single page, as opposed to navigation among pages. It interprets URL-like links to determine which views to create or destroy, and which components to load or unload. It allows you to take advantage of [lazy loading](glossary#lazy-loading "lazy loading - Glossary | Angular") in your Angular applications. To learn more, see [Routing and Navigation](router "Common Routing Tasks | Angular"). router outlet ------------- A [directive](glossary#directive "directive - Glossary | Angular") that acts as a placeholder in the template of a routing component. Angular dynamically renders the template based on the current router state. routing component ----------------- An Angular [component](glossary#component "component - Glossary | Angular") with a `[RouterOutlet](../api/router/routeroutlet)` directive in its template that displays views based on router navigations. 
To learn more, see [Routing and Navigation](router "Common Routing Tasks | Angular"). rule ---- In [schematics](glossary#schematic "schematic - Glossary | Angular"), a function that operates on a [file tree](glossary#tree "tree - Glossary | Angular") to create, delete, or modify files in a specific manner. schematic --------- A scaffolding library that defines how to generate or transform a programming project by creating, modifying, refactoring, or moving files and code. A schematic defines [rules](glossary#rule "rule - Glossary | Angular") that operate on a virtual file system referenced as a [tree](glossary#tree "tree - Glossary | Angular"). The [Angular CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") uses schematics to generate and modify [Angular projects](glossary#project "project - Glossary | Angular") and parts of projects. * Angular provides a set of schematics for use with the Angular CLI. See the [Angular CLI command reference](cli "CLI Overview and Command Reference | Angular"). The [`ng add`](cli/add "ng add | CLI | Angular") Angular [CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") command runs schematics as part of adding a library to your project. The [`ng generate`](cli/generate "ng generate | CLI | Angular") Angular [CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") command runs schematics to create applications, libraries, and Angular code constructs. * [Library](glossary#library "library - Glossary | Angular") developers can create schematics that enable the Angular CLI to add and update their published libraries, and to generate artifacts the library defines. Add these schematics to the npm package that you use to publish and share your library. To learn more, see [Schematics](schematics "Generating code using schematics | Angular"). To learn more, see also [Integrating Libraries with the CLI](creating-libraries#integrating-with-the-cli-using-code-generation-schematics "Integrating with the CLI using code-generation schematics - Creating libraries | Angular"). Schematics CLI -------------- Schematics come with their own command-line tool. Use Node 6.9 or above to install the Schematics CLI globally. ``` npm install -g @angular-devkit/schematics-cli ``` This installs the `schematics` executable, which you can use to create a new schematics [collection](glossary#collection "collection - Glossary | Angular") with an initial named schematic. The collection directory is a workspace for schematics. You can also use the `schematics` command to add a new schematic to an existing collection, or extend an existing schematic. scoped package -------------- A way to group related [npm packages](npm-packages "Workspace npm dependencies | Angular"). NgModules are delivered within scoped packages whose names begin with the Angular *scope name* `@angular`. For example, `@angular/core`, `@angular/common`, `@angular/forms`, and `@angular/router`. Import a scoped package in the same way that you import a normal package. ``` import { Component } from '@angular/core'; ``` server-side rendering --------------------- A technique that generates static application pages on the server, and can generate and serve those pages in response to requests from browsers. It can also pre-generate pages as HTML files that you serve later. 
This technique can improve performance on mobile and low-powered devices and improve the user experience by showing a static first page quickly while the client-side application is loading. The static version can also make your application more visible to web crawlers. You can easily prepare an application for server-side rendering by using the [Angular CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") to run the [Angular Universal](glossary#universal "Universal - Glossary | Angular") tool, using the `@nguniversal/express-engine` [schematic](glossary#schematic "schematic - Glossary | Angular"). service ------- In Angular, a class with the [@Injectable()](glossary#injectable "injectable - Glossary | Angular") decorator that encapsulates non-UI logic and code that can be reused across an application. Angular distinguishes components from services to increase modularity and reusability. The `@[Injectable](../api/core/injectable)()` metadata allows the service class to be used with the [dependency injection](glossary#dependency-injection-di "dependency injection (DI) - Glossary | Angular") mechanism. The injectable class is instantiated by a [provider](glossary#provider "provider - Glossary | Angular"). [Injectors](glossary#injector "injector - Glossary | Angular") maintain lists of providers and use them to provide service instances when they are required by components or other services. To learn more, see [Introduction to Services and Dependency Injection](architecture-services "Introduction to services and dependency injection | Angular"). standalone ---------- A configuration of [components](glossary#component "component - Glossary | Angular"), [directives](glossary#directive "directive - Glossary | Angular"), and [pipes](glossary#pipe "pipe - Glossary | Angular") to indicate that this class can be imported directly without declaring it in any [NgModule](glossary#ngmodule "NgModule - Glossary | Angular"). Standalone components, directives, and pipes differ from non-standalone ones by: * having the `standalone` field of their decorator set to `true`. * allowing their direct importing without the need to pass through NgModules. * specifying their dependencies directly in their decorator. To learn more, see the [Standalone components guide](standalone-components "Getting started with standalone components | Angular"). structural directive -------------------- A category of [directive](glossary#directive "directive - Glossary | Angular") that is responsible for shaping HTML layout by modifying the DOM. Modification of the DOM includes, adding, removing, or manipulating elements and the associated children. To learn more, see [Structural Directives](structural-directives "Structural directives | Angular"). subscriber ---------- A function that defines how to obtain or generate values or messages to be published. This function is executed when a consumer runs the `subscribe()` method of an [observable](glossary#observable "observable - Glossary | Angular"). The act of subscribing to an observable triggers its execution, associates callbacks with it, and creates a `Subscription` object that lets you unsubscribe. The `subscribe()` method takes an [observer](glossary#observer "observer - Glossary | Angular") JavaScript object with up to three callbacks, one for each type of notification that an observable can deliver. * The `next` notification sends a value such as a number, a string, or an object. 
* The `error` notification sends a JavaScript Error or exception. * The `complete` notification does not send a value, but the handler is run when the method completes. Scheduled values can continue to be returned after the method completes. target ------ A buildable or runnable subset of a [project](glossary#project "project - Glossary | Angular"), configured as an object in the [workspace configuration file](workspace-config#project-tool-configuration-options "Project tool configuration options - Angular workspace configuration | Angular"), and executed by an [Architect](glossary#architect "Architect - Glossary | Angular") [builder](glossary#builder "builder - Glossary | Angular"). In the `angular.json` file, each project has an "architect" section that contains targets which configure builders. Some of these targets correspond to Angular [CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") command, such as `build`, `serve`, `test`, and `lint`. For example, the Architect builder invoked by the `ng build` command to compile a project uses a particular build tool, and has a default configuration with values that you can override on the command line. The `build` target also defines an alternate configuration for a "development" build, which you can invoke with the `--configuration development` flag on the `build` command. The Architect tool provides a set of builders. The [`ng new`](cli/new "ng new | CLI | Angular") Angular [CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") command provides a set of targets for the initial application project. The [`ng generate application`](cli/generate#application "application - ng generate | CLI | Angular") and [`ng generate library`](cli/generate#library "library - ng generate | CLI | Angular") Angular [CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") commands provide a set of targets for each new [project](glossary#project "project - Glossary | Angular"). These targets, their options and configurations, can be customized to meet the needs of your project. For example, you may want to add a "staging" or "testing" configuration to the "build" target of a project. You can also define a custom builder, and add a target to the project configuration that uses your custom builder. You can then run the target using the [`ng run`](cli/run "ng run | CLI | Angular") Angular [CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") command. template -------- Code that defines how to render the [view](glossary#view "view - Glossary | Angular") of a component. A template combines straight HTML with Angular [data-binding](glossary#data-binding "data binding - Glossary | Angular") syntax, [directives](glossary#directive "directive - Glossary | Angular"), and [template expressions](glossary#template-expression "template expression - Glossary | Angular") (logical constructs). The Angular elements insert or calculate values that modify the HTML elements before the page is displayed. Learn more about Angular template language in the [Template Syntax](template-syntax "Template syntax | Angular") guide. A template is associated with a [component class](glossary#component "component - Glossary | Angular") through the `@[Component](../api/core/component)()` [decorator](glossary#decorator--decoration "decorator | decoration - Glossary | Angular"). 
The template code can be provided inline, as the value of the `template` property, or in a separate HTML file linked through the `templateUrl` property. Additional templates, represented by `[TemplateRef](../api/core/templateref)` objects, can define alternative or *embedded* views, which can be referenced from multiple components. template-driven forms --------------------- A format for building Angular forms using HTML forms and input elements in the view. The alternative format uses the [reactive forms](glossary#reactive-forms "reactive forms - Glossary | Angular") framework. When using template-driven forms: * The "source of truth" is the template. The validation is defined using attributes on the individual input elements. * [Two-way binding](glossary#data-binding "data binding - Glossary | Angular") with `[ngModel](../api/forms/ngmodel)` keeps the component model synchronized with the user's entry into the input elements. * Behind the scenes, Angular creates a new control for each input element, provided you have set up a `name` attribute and two-way binding for each input. * The associated Angular directives are prefixed with `ng` such as `[ngForm](../api/forms/ngform)`, `[ngModel](../api/forms/ngmodel)`, and `[ngModelGroup](../api/forms/ngmodelgroup)`. The alternative is a reactive form. For an introduction and comparison of both forms approaches, see [Introduction to Angular Forms](forms-overview "Introduction to forms in Angular | Angular"). template expression ------------------- A TypeScript-like syntax that Angular evaluates within a [data binding](glossary#data-binding "data binding - Glossary | Angular"). template reference variable --------------------------- A variable defined in a template that references an instance associated with an element, such as a directive instance, component instance, template as in `[TemplateRef](../api/core/templateref)`, or DOM element. After declaring a template reference variable on an element in a template, you can access values from that variable elsewhere within the same template. The following example defines a template reference variable named `#phone`. ``` <input #phone placeholder="phone number" /> ``` To learn more, see [Template reference variable](template-reference-variables "Template variables | Angular"). template input variable ----------------------- A template input variable is a variable you can reference within a single instance of the template. You declare a template input variable using the `let` keyword as in `let customer`. ``` <tr *ngFor="let customer of customers;"> <td>{{customer.customerNo}}</td> <td>{{customer.name}}</td> <td>{{customer.address}}</td> <td>{{customer.city}}</td> <td>{{customer.state}}</td> <button (click)="selectedCustomer=customer">Select</button> </tr> ``` Read and learn more about [template input variables](template-reference-variables#template-input-variable "Template input variable - Template variables | Angular"). token ----- An opaque identifier used for efficient table lookup. In Angular, a [DI token](glossary#di-token "DI token - Glossary | Angular") is used to find [providers](glossary#provider "provider - Glossary | Angular") of dependencies in the [dependency injection](glossary#dependency-injection-di "dependency injection (DI) - Glossary | Angular") system. transpile --------- The translation process that transforms one version of JavaScript to another version; for example, down-leveling ES2015 to the older ES5 version. 
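To make the token entry above (and the earlier DI token entry) more concrete, the following sketch defines an `InjectionToken` for a plain configuration string and injects it into a component. The token name, value, and class names are hypothetical.

```
import { Component, Inject, InjectionToken, NgModule } from '@angular/core';

// A DI token for a value that has no class to use as a lookup key.
export const API_URL = new InjectionToken<string>('API_URL');

@Component({
  selector: 'app-api-info',
  template: `<p>API endpoint: {{ apiUrl }}</p>`,
})
export class ApiInfoComponent {
  // The token, rather than a type, identifies which provider to use.
  constructor(@Inject(API_URL) public apiUrl: string) {}
}

@NgModule({
  declarations: [ApiInfoComponent],
  // The provider associates the token with a concrete value.
  providers: [{ provide: API_URL, useValue: 'https://example.com/api' }],
})
export class ApiInfoModule {}
```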
tree ---- In [schematics](glossary#schematic "schematic - Glossary | Angular"), a virtual file system represented by the `Tree` class. Schematic [rules](glossary#rule "rule - Glossary | Angular") take a tree object as input, operate on it, and return a new tree object. TypeScript ---------- A programming language based on JavaScript that is notable for its optional typing system. TypeScript provides compile-time type checking and strong tooling support. The type checking and tooling support include code completion, refactoring, inline documentation, and intelligent search. Many code editors and IDEs support TypeScript either natively or with plug-ins. TypeScript is the preferred language for Angular development. To learn more about TypeScript, see [typescriptlang.org](https://www.typescriptlang.org "TypeScript"). TypeScript configuration file ----------------------------- A file that specifies the root files and the compiler options required to compile a TypeScript project. To learn more, see [TypeScript configuration](typescript-configuration "TypeScript configuration | Angular"). unidirectional data flow ------------------------ A data flow model where the component tree is always checked for changes in one direction from parent to child, which prevents cycles in the change detection graph. In practice, this means that data in Angular flows downward during change detection. A parent component can easily change values in its child components because the parent is checked first. A failure could occur, however, if a child component tries to change a value in its parent during change detection (inverting the expected data flow), because the parent component has already been rendered. In development mode, Angular throws the `ExpressionChangedAfterItHasBeenCheckedError` error if your application attempts to do this, rather than silently failing to render the new value. To avoid this error, a [lifecycle hook](lifecycle-hooks "Lifecycle Hooks | Angular") method that seeks to make such a change should trigger a new change detection run. The new run follows the same direction as before, but succeeds in picking up the new value. Universal --------- A tool for implementing [server-side rendering](glossary#server-side-rendering "server-side rendering - Glossary | Angular") of an Angular application. When integrated with an app, Universal generates and serves static pages on the server in response to requests from browsers. The initial static page serves as a fast-loading placeholder while the full application is being prepared for normal execution in the browser. To learn more, see [Angular Universal: server-side rendering](universal "Server-side rendering (SSR) with Angular Universal | Angular"). view ---- The smallest grouping of display elements that can be created and destroyed together. Angular renders a view under the control of one or more [directives](glossary#directive "directive - Glossary | Angular"). A [component](glossary#component "component - Glossary | Angular") class and its associated [template](glossary#template "template - Glossary | Angular") define a view. A view is specifically represented by a `[ViewRef](../api/core/viewref)` instance associated with a component. A view that belongs immediately to a component is referenced as a *host view*. Views are typically collected into [view hierarchies](glossary#view-hierarchy "view hierarchy - Glossary | Angular").
Properties of elements in a view can change dynamically, in response to user actions; the structure (number and order) of elements in a view cannot. You can change the structure of elements by inserting, moving, or removing nested views within their view containers. View hierarchies can be loaded and unloaded dynamically as the user navigates through the application, typically under the control of a [router](glossary#router "router - Glossary | Angular"). View Engine ----------- A previous compilation and rendering pipeline used by Angular. It has since been replaced by [Ivy](glossary#ivy "Ivy - Glossary | Angular") and is no longer in use. View Engine was deprecated in version 9 and removed in version 13. view hierarchy -------------- A tree of related views that can be acted on as a unit. The root view is referenced as the *host view* of a component. A host view is the root of a tree of *embedded views*, collected in a `[ViewContainerRef](../api/core/viewcontainerref)` view container attached to an anchor element in the hosting component. The view hierarchy is a key part of Angular [change detection](glossary#change-detection " change detection - Glossary | Angular"). The view hierarchy does not imply a component hierarchy. Views that are embedded in the context of a particular hierarchy can be host views of other components. Those components can be in the same NgModule as the hosting component, or belong to other NgModules. web component ------------- See [custom element](glossary#custom-element "custom element - Glossary | Angular"). workspace --------- A collection of Angular [projects](glossary#project "project - Glossary | Angular") (that is, applications and libraries) powered by the Angular [CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") that are typically co-located in a single source-control repository (such as [git](https://git-scm.com "Git")). The [`ng new`](cli/new "ng new | CLI | Angular") Angular [CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") command creates a file system directory (the "workspace root"). In the workspace root, it also creates the workspace [configuration file](glossary#configuration "configuration - Glossary | Angular") (`angular.json`) and, by default, an initial application project with the same name. Commands that create or operate on applications and libraries (such as `add` and `generate`) must be executed from within a workspace directory. To learn more, see [Workspace Configuration](workspace-config "Angular workspace configuration | Angular"). workspace configuration ----------------------- A file named `angular.json` at the root level of an Angular [workspace](glossary#workspace "workspace - Glossary | Angular") provides workspace-wide and project-specific configuration defaults for build and development tools that are provided by or integrated with the [Angular CLI](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular"). To learn more, see [Workspace Configuration](workspace-config "Angular workspace configuration | Angular"). Additional project-specific configuration files are used by tools, such as `package.json` for the [npm package manager](glossary#npm-package "npm package - Glossary | Angular"), `tsconfig.json` for [TypeScript transpilation](glossary#transpile "transpile - Glossary | Angular"), and `tslint.json` for [TSLint](https://palantir.github.io/tslint "TSLint | Palantir | GitHub").
To learn more, see [Workspace and Project File Structure](file-structure "Workspace and project file structure | Angular"). zone ---- An execution context for a set of asynchronous tasks. Useful for debugging, profiling, and testing applications that include asynchronous operations such as event processing, promises, and calls to remote servers. An Angular application runs in a zone where it can respond to asynchronous events by checking for data changes and updating the information it displays by resolving [data bindings](glossary#data-binding "data binding - Glossary | Angular"). A zone client can take action before and after an async operation completes. Learn more about zones in this [Brian Ford video](https://www.youtube.com/watch?v=3IqtmUscE_U "Brian Ford - Zones - NG-Conf 2014 | YouTube"). Last reviewed on Mon Feb 28 2022
angular Understanding templates Understanding templates ======================= In Angular, a template is a blueprint for a fragment of a user interface (UI). Templates are written in HTML, and special syntax can be used within a template to build on many of Angular's features. Prerequisites ------------- Before learning template syntax, you should be familiar with the following: * [Angular concepts](architecture) * JavaScript * HTML * CSS Enhancing HTML -------------- Angular extends the HTML syntax in your templates with additional functionality. For example, Angular’s data binding syntax helps to set Document Object Model (DOM) properties dynamically. Almost all HTML syntax is valid template syntax. However, because an Angular template is only a fragment of the UI, it does not include elements such as `<html>`, `<body>`, or `<base>`. > To eliminate the risk of script injection attacks, Angular does not support the `<script>` element in templates. Angular ignores the `<script>` tag and outputs a warning to the browser console. For more information, see the [Security](security) page. > > More on template syntax ----------------------- You might also be interested in the following: [Interpolation Learn how to use interpolation and expressions in HTML. Interpolation](interpolation "Interpolation") [Property binding Set properties of target elements or directive @Input() decorators. Property binding](property-binding "Property binding") [Attribute binding Set the value of attributes. Attribute binding](attribute-binding "Attribute binding") [Class and style binding Set the value of class and style. Class and style binding](class-binding "Class and style binding") [Event binding Listen for events and your HTML. Event binding](event-binding "Event binding") [Template reference variables Use special variables to reference a DOM element within a template. Template reference variables](template-reference-variables "Template reference variables") [Built-in directives Listen to and modify the behavior and layout of HTML. Built-in directives](built-in-directives "Built-in directives") [Inputs and Outputs Share data between the parent context and child directives or components. Inputs and Outputs](inputs-outputs "Inputs and Outputs") Last reviewed on Wed May 11 2022 angular Testing services Testing services ================ To check that your services are working as you intend, you can write tests specifically for them. > If you'd like to experiment with the application that this guide describes, run it in your browser or download and run it locally. > > Services are often the smoothest files to unit test. Here are some synchronous and asynchronous unit tests of the `ValueService` written without assistance from Angular testing utilities. 
``` // Straight Jasmine testing without Angular's testing support describe('ValueService', () => { let service: ValueService; beforeEach(() => { service = new ValueService(); }); it('#getValue should return real value', () => { expect(service.getValue()).toBe('real value'); }); it('#getObservableValue should return value from observable', (done: DoneFn) => { service.getObservableValue().subscribe(value => { expect(value).toBe('observable value'); done(); }); }); it('#getPromiseValue should return value from a promise', (done: DoneFn) => { service.getPromiseValue().then(value => { expect(value).toBe('promise value'); done(); }); }); }); ``` Services with dependencies -------------------------- Services often depend on other services that Angular injects into the constructor. In many cases, you can create and *inject* these dependencies by hand while calling the service's constructor. The `MasterService` is a simple example: ``` @Injectable() export class MasterService { constructor(private valueService: ValueService) { } getValue() { return this.valueService.getValue(); } } ``` `MasterService` delegates its only method, `getValue`, to the injected `ValueService`. Here are several ways to test it. ``` describe('MasterService without Angular testing support', () => { let masterService: MasterService; it('#getValue should return real value from the real service', () => { masterService = new MasterService(new ValueService()); expect(masterService.getValue()).toBe('real value'); }); it('#getValue should return faked value from a fakeService', () => { masterService = new MasterService(new FakeValueService()); expect(masterService.getValue()).toBe('faked service value'); }); it('#getValue should return faked value from a fake object', () => { const fake = { getValue: () => 'fake value' }; masterService = new MasterService(fake as ValueService); expect(masterService.getValue()).toBe('fake value'); }); it('#getValue should return stubbed value from a spy', () => { // create `getValue` spy on an object representing the ValueService const valueServiceSpy = jasmine.createSpyObj('ValueService', ['getValue']); // set the value to return when the `getValue` spy is called. const stubValue = 'stub value'; valueServiceSpy.getValue.and.returnValue(stubValue); masterService = new MasterService(valueServiceSpy); expect(masterService.getValue()) .withContext('service returned stub value') .toBe(stubValue); expect(valueServiceSpy.getValue.calls.count()) .withContext('spy method was called once') .toBe(1); expect(valueServiceSpy.getValue.calls.mostRecent().returnValue) .toBe(stubValue); }); }); ``` The first test creates a `ValueService` with `new` and passes it to the `MasterService` constructor. However, injecting the real service rarely works well as most dependent services are difficult to create and control. Instead, mock the dependency, use a dummy value, or create a [spy](https://jasmine.github.io/tutorials/your_first_suite#section-Spies) on the pertinent service method. > Prefer spies as they are usually the best way to mock services. > > These standard testing techniques are great for unit testing services in isolation. However, you almost always inject services into application classes using Angular dependency injection and you should have tests that reflect that usage pattern. Angular testing utilities make it straightforward to investigate how injected services behave. 
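The `ValueService` exercised by the tests above is not shown in this excerpt. A minimal sketch that would satisfy those expectations, offered as an assumption rather than the guide's actual source file, could look like this:

```
import { Injectable } from '@angular/core';
import { Observable, of } from 'rxjs';

// Hypothetical reconstruction of the service under test; the real file may differ.
@Injectable()
export class ValueService {
  protected value = 'real value';

  getValue() {
    return this.value;
  }

  getObservableValue(): Observable<string> {
    return of('observable value');
  }

  getPromiseValue(): Promise<string> {
    return Promise.resolve('promise value');
  }
}
```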
Testing services with the `[TestBed](../api/core/testing/testbed)` ------------------------------------------------------------------ Your application relies on Angular [dependency injection (DI)](dependency-injection) to create services. When a service has a dependent service, DI finds or creates that dependent service. And if that dependent service has its own dependencies, DI finds-or-creates them as well. As service *consumer*, you don't worry about any of this. You don't worry about the order of constructor arguments or how they're created. As a service *tester*, you must at least think about the first level of service dependencies but you *can* let Angular DI do the service creation and deal with constructor argument order when you use the `[TestBed](../api/core/testing/testbed)` testing utility to provide and create services. Angular `[TestBed](../api/core/testing/testbed)` ------------------------------------------------ The `[TestBed](../api/core/testing/testbed)` is the most important of the Angular testing utilities. The `[TestBed](../api/core/testing/testbed)` creates a dynamically-constructed Angular *test* module that emulates an Angular [@NgModule](ngmodules). The `TestBed.configureTestingModule()` method takes a metadata object that can have most of the properties of an [@NgModule](ngmodules). To test a service, you set the `providers` metadata property with an array of the services that you'll test or mock. ``` let service: ValueService; beforeEach(() => { TestBed.configureTestingModule({ providers: [ValueService] }); }); ``` Then inject it inside a test by calling `TestBed.inject()` with the service class as the argument. > **NOTE**: `TestBed.get()` was deprecated as of Angular version 9. To help minimize breaking changes, Angular introduces a new function called `TestBed.inject()`, which you should use instead. For information on the removal of `TestBed.get()`, see its entry in the [Deprecations index](deprecations#index). > > ``` it('should use ValueService', () => { service = TestBed.inject(ValueService); expect(service.getValue()).toBe('real value'); }); ``` Or inside the `beforeEach()` if you prefer to inject the service as part of your setup. ``` beforeEach(() => { TestBed.configureTestingModule({ providers: [ValueService] }); service = TestBed.inject(ValueService); }); ``` When testing a service with a dependency, provide the mock in the `providers` array. In the following example, the mock is a spy object. ``` let masterService: MasterService; let valueServiceSpy: jasmine.SpyObj<ValueService>; beforeEach(() => { const spy = jasmine.createSpyObj('ValueService', ['getValue']); TestBed.configureTestingModule({ // Provide both the service-to-test and its (spy) dependency providers: [ MasterService, { provide: ValueService, useValue: spy } ] }); // Inject both the service-to-test and its (spy) dependency masterService = TestBed.inject(MasterService); valueServiceSpy = TestBed.inject(ValueService) as jasmine.SpyObj<ValueService>; }); ``` The test consumes that spy in the same way it did earlier. 
``` it('#getValue should return stubbed value from a spy', () => { const stubValue = 'stub value'; valueServiceSpy.getValue.and.returnValue(stubValue); expect(masterService.getValue()) .withContext('service returned stub value') .toBe(stubValue); expect(valueServiceSpy.getValue.calls.count()) .withContext('spy method was called once') .toBe(1); expect(valueServiceSpy.getValue.calls.mostRecent().returnValue) .toBe(stubValue); }); ``` Testing without `beforeEach()` ------------------------------ Most test suites in this guide call `beforeEach()` to set the preconditions for each `it()` test and rely on the `[TestBed](../api/core/testing/testbed)` to create classes and inject services. There's another school of testing that never calls `beforeEach()` and prefers to create classes explicitly rather than use the `[TestBed](../api/core/testing/testbed)`. Here's how you might rewrite one of the `MasterService` tests in that style. Begin by putting re-usable, preparatory code in a *setup* function instead of `beforeEach()`. ``` function setup() { const valueServiceSpy = jasmine.createSpyObj('ValueService', ['getValue']); const stubValue = 'stub value'; const masterService = new MasterService(valueServiceSpy); valueServiceSpy.getValue.and.returnValue(stubValue); return { masterService, stubValue, valueServiceSpy }; } ``` The `setup()` function returns an object literal with the variables, such as `masterService`, that a test might reference. You don't define *semi-global* variables (for example, `let masterService: MasterService`) in the body of the `describe()`. Then each test invokes `setup()` in its first line, before continuing with steps that manipulate the test subject and assert expectations. ``` it('#getValue should return stubbed value from a spy', () => { const { masterService, stubValue, valueServiceSpy } = setup(); expect(masterService.getValue()) .withContext('service returned stub value') .toBe(stubValue); expect(valueServiceSpy.getValue.calls.count()) .withContext('spy method was called once') .toBe(1); expect(valueServiceSpy.getValue.calls.mostRecent().returnValue) .toBe(stubValue); }); ``` Notice how the test uses [*destructuring assignment*](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment) to extract the setup variables that it needs. ``` const { masterService, stubValue, valueServiceSpy } = setup(); ``` Many developers feel this approach is cleaner and more explicit than the traditional `beforeEach()` style. Although this testing guide follows the traditional style and the default [CLI schematics](https://github.com/angular/angular-cli) generate test files with `beforeEach()` and `[TestBed](../api/core/testing/testbed)`, feel free to adopt *this alternative approach* in your own projects. Testing HTTP services --------------------- Data services that make HTTP calls to remote servers typically inject and delegate to the Angular [`HttpClient`](http) service for XHR calls. You can test a data service with an injected `[HttpClient](../api/common/http/httpclient)` spy as you would test any service with a dependency. 
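The tests below exercise a `HeroService` whose implementation is part of the sample application and is not reproduced in this section. A rough sketch consistent with those tests is shown here; the endpoint URL and the exact error-handling strategy are assumptions. The only behavior the tests rely on is that `getHeroes()` delegates to `HttpClient.get()` and rethrows an error whose `message` contains the server-sent error text. The `asyncData()` and `asyncError()` helpers used in the tests are small utilities from the guide's sample code that wrap a value or an error in an observable that emits asynchronously.

```
// Hypothetical sketch of the data service under test; the real sample may differ.
import { HttpClient, HttpErrorResponse } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable, catchError, throwError } from 'rxjs';

export interface Hero { id: number; name: string; }

@Injectable({ providedIn: 'root' })
export class HeroService {
  private heroesUrl = 'api/heroes';  // assumed endpoint

  constructor(private http: HttpClient) {}

  getHeroes(): Observable<Hero[]> {
    return this.http.get<Hero[]>(this.heroesUrl).pipe(
      // Rethrow a plain Error whose message includes the server-sent body,
      // which is what the 404 test below asserts against.
      catchError((err: HttpErrorResponse) =>
        throwError(() => new Error(`getHeroes failed: ${err.error}`)))
    );
  }
}
```

With that shape in mind, here are the spy-based tests.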
``` let httpClientSpy: jasmine.SpyObj<HttpClient>; let heroService: HeroService; beforeEach(() => { // TODO: spy on other methods too httpClientSpy = jasmine.createSpyObj('HttpClient', ['get']); heroService = new HeroService(httpClientSpy); }); it('should return expected heroes (HttpClient called once)', (done: DoneFn) => { const expectedHeroes: Hero[] = [{ id: 1, name: 'A' }, { id: 2, name: 'B' }]; httpClientSpy.get.and.returnValue(asyncData(expectedHeroes)); heroService.getHeroes().subscribe({ next: heroes => { expect(heroes) .withContext('expected heroes') .toEqual(expectedHeroes); done(); }, error: done.fail }); expect(httpClientSpy.get.calls.count()) .withContext('one call') .toBe(1); }); it('should return an error when the server returns a 404', (done: DoneFn) => { const errorResponse = new HttpErrorResponse({ error: 'test 404 error', status: 404, statusText: 'Not Found' }); httpClientSpy.get.and.returnValue(asyncError(errorResponse)); heroService.getHeroes().subscribe({ next: heroes => done.fail('expected an error, not heroes'), error: error => { expect(error.message).toContain('test 404 error'); done(); } }); }); ``` > The `HeroService` methods return `Observables`. You must *subscribe* to an observable to (a) cause it to execute and (b) assert that the method succeeds or fails. > > The `subscribe()` method takes a success (`next`) and fail (`error`) callback. Make sure you provide *both* callbacks so that you capture errors. Neglecting to do so produces an asynchronous uncaught observable error that the test runner will likely attribute to a completely different test. > > `[HttpClientTestingModule](../api/common/http/testing/httpclienttestingmodule)` ------------------------------------------------------------------------------- Extended interactions between a data service and the `[HttpClient](../api/common/http/httpclient)` can be complex and difficult to mock with spies. The `[HttpClientTestingModule](../api/common/http/testing/httpclienttestingmodule)` can make these testing scenarios more manageable. While the *code sample* accompanying this guide demonstrates `[HttpClientTestingModule](../api/common/http/testing/httpclienttestingmodule)`, this page defers to the [Http guide](http#testing-http-requests), which covers testing with the `[HttpClientTestingModule](../api/common/http/testing/httpclienttestingmodule)` in detail. Last reviewed on Mon Feb 28 2022 angular Binding syntax Binding syntax ============== Data binding automatically keeps your page up-to-date based on your application's state. You use data binding to specify things such as the source of an image, the state of a button, or data for a particular user. > See the live example for a working example containing the code snippets in this guide. > > Data binding and HTML --------------------- Developers can customize HTML by specifying attributes with string values. In the following example, `class`, `src`, and `disabled` modify the `<div>`, `<[img](../api/common/ngoptimizedimage)>`, and `<button>` elements respectively. ``` <div class="special">Plain old HTML</div> <img src="images/item.png"> <button disabled>Save</button> ``` Use data binding to control things like the state of a button: ``` <!-- Bind button disabled state to `isUnchanged` property --> <button type="button" [disabled]="isUnchanged">Save</button> ``` Notice that the binding is to the `disabled` property of the button's DOM element, not the attribute. Data binding works with properties of DOM elements, components, and directives, not HTML attributes. 
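As a minimal illustration, the `[disabled]="isUnchanged"` binding above could be backed by a component along these lines; the component and method names here are assumed for the example and are not part of the guide's sample.

```
// Hypothetical component backing the [disabled]="isUnchanged" binding shown above.
import { Component } from '@angular/core';

@Component({
  selector: 'app-save-button',
  template: `
    <!-- The button stays disabled until the user changes something -->
    <button type="button" [disabled]="isUnchanged" (click)="save()">Save</button>
  `,
})
export class SaveButtonComponent {
  isUnchanged = true;

  markDirty() {
    // Called when the user edits something and there is work to save.
    this.isUnchanged = false;
  }

  save() {
    // After saving there is nothing new to persist, so disable the button again.
    this.isUnchanged = true;
  }
}
```

Because the binding targets the DOM `disabled` property, toggling `isUnchanged` enables and disables the button at runtime, which setting the HTML attribute alone could not do.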
### HTML attributes and DOM properties Angular binding distinguishes between HTML attributes and DOM properties. Attributes initialize DOM properties and you can configure them to modify an element's behavior. Properties are features of DOM nodes. * A few HTML attributes have 1:1 mapping to properties; for example, ``` id ``` * Some HTML attributes don't have corresponding properties; for example, ``` aria-* ``` * Some DOM properties don't have corresponding attributes; for example, ``` textContent ``` > Remember that HTML attributes and DOM properties are different things, even when they have the same name. > > In Angular, the only role of HTML attributes is to initialize element and directive state. When you write a data binding, you're dealing exclusively with the DOM properties and events of the target object. #### Example 1: an `<input>` When the browser renders `<input type="text" value="Sarah">`, it creates a corresponding DOM node with a `value` property and initializes that `value` to "Sarah". ``` <input type="text" value="Sarah"> ``` When the user enters `Sally` into the `<input>`, the DOM element `value` property becomes `Sally`. However, if you look at the HTML attribute `value` using `input.getAttribute('value')`, you can see that the attribute remains unchanged; it returns "Sarah". The HTML attribute `value` specifies the initial value; the DOM `value` property is the current value. To see attributes versus DOM properties in a functioning app, see the live example, especially for binding syntax. #### Example 2: a disabled button A button's `disabled` property is `false` by default, so the button is enabled. When you add the `disabled` attribute, you are initializing the button's `disabled` property to `true`, which disables the button. ``` <button disabled>Test Button</button> ``` Adding and removing the `disabled` attribute disables and enables the button. However, the value of the attribute is irrelevant, which is why you cannot enable a button by writing `<button disabled="false">Still Disabled</button>`. To control the state of the button, set the `disabled` property instead. #### Property and attribute comparison Though you could technically set the `[attr.disabled]` attribute binding, the values are different in that the property binding must be a boolean value, while its corresponding attribute binding relies on whether the value is `null` or not. Consider the following: ``` <input [disabled]="condition ? true : false"> <input [attr.disabled]="condition ? 'disabled' : null"> ``` The first line, which uses the `disabled` property, uses a boolean value. The second line, which uses the `disabled` attribute, checks for `null`. Generally, use property binding over attribute binding as a boolean value is easy to read, the syntax is shorter, and a property is more performant. To see the `disabled` button example in a functioning application, see the live example. This example shows you how to toggle the disabled property from the component.
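The live example is not reproduced in this text, but a small sketch of toggling the `disabled` state from a component (all names here are illustrative) might look like this:

```
// Hypothetical sketch: toggling a button's disabled state from the component.
import { Component } from '@angular/core';

@Component({
  selector: 'app-toggle-disabled',
  template: `
    <!-- Property binding expects a boolean -->
    <button type="button" [disabled]="isDisabled">Property-bound</button>

    <!-- Attribute binding expects a string, or null to remove the attribute -->
    <button type="button" [attr.disabled]="isDisabled ? 'disabled' : null">Attribute-bound</button>

    <button type="button" (click)="toggle()">Toggle</button>
  `,
})
export class ToggleDisabledComponent {
  isDisabled = true;

  toggle() {
    this.isDisabled = !this.isDisabled;
  }
}
```

Both buttons end up enabled or disabled together; the property binding is simply the shorter and more direct form.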
Types of data binding --------------------- Angular provides three categories of data binding according to the direction of data flow: * From source to view * From view to source * In a two-way sequence of view to source to view | Type | Syntax | Category | | --- | --- | --- | | Interpolation Property Attribute Class Style | ``` {{expression}} [target]="expression" ``` | One-way from data source to view target | | Event | ``` (target)="statement" ``` | One-way from view target to data source | | Two-way | ``` [(target)]="expression" ``` | Two-way | Binding types other than interpolation have a target name to the left of the equal sign. The target of a binding is a property or event, which you surround with square bracket (`[ ]`) characters, parenthesis (`( )`) characters, or both (`[( )]`) characters. The binding punctuation of `[]`, `()`, `[()]`, and the prefix specify the direction of data flow. * Use `[]` to bind from source to view * Use `()` to bind from view to source * Use `[()]` to bind in a two-way sequence of view to source to view Place the expression or statement to the right of the equal sign within double quote (`""`) characters. For more information see [Interpolation](interpolation) and [Template statements](template-statements). Binding types and targets ------------------------- The target of a data binding can be a property, an event, or an attribute name. Every public member of a source directive is automatically available for binding in a template expression or statement. The following table summarizes the targets for the different binding types. | Type | Target | Examples | | --- | --- | --- | | Property | Element property Component property Directive property | `alt`, `src`, `hero`, and `[ngClass](../api/common/ngclass)` in the following: ``` <img [alt]="hero.name" [src]="heroImageUrl"> <app-hero-detail [hero]="currentHero"></app-hero-detail> <div [ngClass]="{'special': isSpecial}"></div> ``` | | Event | Element event Component event Directive event | `click`, `deleteRequest`, and `myClick` in the following: ``` <button type="button" (click)="onSave()">Save</button> <app-hero-detail (deleteRequest)="deleteHero()"></app-hero-detail> <div (myClick)="clicked=$event" clickable>click me</div> ``` | | Two-way | Event and property | ``` <input [(ngModel)]="name"> ``` | | Attribute | Attribute (the exception) | ``` <button type="button" [attr.aria-label]="help">help</button> ``` | | Class | `class` property | ``` <div [class.special]="isSpecial">Special</div> ``` | | Style | `[style](../api/animations/style)` property | ``` <button type="button" [style.color]="isSpecial ? 'red' : 'green'"> ``` | Last reviewed on Mon Feb 28 2022
angular Angular elements overview Angular elements overview ========================= *Angular elements* are Angular components packaged as *custom elements* (also called Web Components), a web standard for defining new HTML elements in a framework-agnostic way. > For the sample application that this page describes, see the live example. > > [Custom elements](https://developer.mozilla.org/docs/Web/Web_Components/Using_custom_elements) are a Web Platform feature currently supported by Chrome, Edge (Chromium-based), Firefox, Opera, and Safari, and available in other browsers through polyfills (see [Browser Support](elements#browser-support)). A custom element extends HTML by allowing you to define a tag whose content is created and controlled by JavaScript code. The browser maintains a `CustomElementRegistry` of defined custom elements, which maps an instantiable JavaScript class to an HTML tag. The `@angular/elements` package exports a `[createCustomElement](../api/elements/createcustomelement)()` API that provides a bridge from Angular's component interface and change detection functionality to the built-in DOM API. Transforming a component to a custom element makes all of the required Angular infrastructure available to the browser. Creating a custom element is simple and straightforward, and automatically connects your component-defined view with change detection and data binding, mapping Angular functionality to the corresponding built-in HTML equivalents. > We are working on custom elements that can be used by web apps built on other frameworks. A minimal, self-contained version of the Angular framework is injected as a service to support the component's change-detection and data-binding functionality. For more about the direction of development, check out this [video presentation](https://www.youtube.com/watch?v=Z1gLFPLVJjY&t=4s). > > Using custom elements --------------------- Custom elements bootstrap themselves - they start automatically when they are added to the DOM, and are automatically destroyed when removed from the DOM. Once a custom element is added to the DOM for any page, it looks and behaves like any other HTML element, and does not require any special knowledge of Angular terms or usage conventions. | | Details | | --- | --- | | Easy dynamic content in an Angular application | Transforming a component to a custom element provides a straightforward path to creating dynamic HTML content in your Angular application. HTML content that you add directly to the DOM in an Angular application is normally displayed without Angular processing, unless you define a *dynamic component*, adding your own code to connect the HTML tag to your application data, and participate in change detection. With a custom element, all of that wiring is taken care of automatically. | | Content-rich applications | If you have a content-rich application, such as the Angular app that presents this documentation, custom elements let you give your content providers sophisticated Angular functionality without requiring knowledge of Angular. For example, an Angular guide like this one is added directly to the DOM by the Angular navigation tools, but can include special elements like `<code-snippet>` that perform complex operations. All you need to tell your content provider is the syntax of your custom element. They don't need to know anything about Angular, or anything about your component's data structures or implementation. 
| ### How it works Use the `[createCustomElement](../api/elements/createcustomelement)()` function to convert a component into a class that can be registered with the browser as a custom element. After you register your configured class with the browser's custom-element registry, use the new element just like a built-in HTML element in content that you add directly into the DOM: ``` <my-popup message="Use Angular!"></my-popup> ``` When your custom element is placed on a page, the browser creates an instance of the registered class and adds it to the DOM. The content is provided by the component's template, which uses Angular template syntax, and is rendered using the component and DOM data. Input properties in the component correspond to input attributes for the element. Transforming components to custom elements ------------------------------------------ Angular provides the `[createCustomElement](../api/elements/createcustomelement)()` function for converting an Angular component, together with its dependencies, to a custom element. The function collects the component's observable properties, along with the Angular functionality the browser needs to create and destroy instances, and to detect and respond to changes. The conversion process implements the `[NgElementConstructor](../api/elements/ngelementconstructor)` interface, and creates a constructor class that is configured to produce a self-bootstrapping instance of your component. Use the built-in [`customElements.define()`](https://developer.mozilla.org/docs/Web/API/CustomElementRegistry/define) function to register the configured constructor and its associated custom-element tag with the browser's [`CustomElementRegistry`](https://developer.mozilla.org/docs/Web/API/CustomElementRegistry). When the browser encounters the tag for the registered element, it uses the constructor to create a custom-element instance. > Avoid using the [`@Component`](../api/core/component) [selector](../api/core/directive#selector) as the custom-element tag name. This can lead to unexpected behavior, due to Angular creating two component instances for a single DOM element: One regular Angular component and a second one using the custom element. > > ### Mapping A custom element *hosts* an Angular component, providing a bridge between the data and logic defined in the component and standard DOM APIs. Component properties and logic maps directly into HTML attributes and the browser's event system. * The creation API parses the component looking for input properties, and defines corresponding attributes for the custom element. It transforms the property names to make them compatible with custom elements, which do not recognize case distinctions. The resulting attribute names use dash-separated lowercase. For example, for a component with `@[Input](../api/core/input)('myInputProp') inputProp`, the corresponding custom element defines an attribute `my-input-prop`. * Component outputs are dispatched as HTML [Custom Events](https://developer.mozilla.org/docs/Web/API/CustomEvent), with the name of the custom event matching the output name. For example, for a component with `@[Output](../api/core/output)() valueChanged = new [EventEmitter](../api/core/eventemitter)()`, the corresponding custom element dispatches events with the name "valueChanged", and the emitted data is stored on the event's `detail` property. 
If you provide an alias, that value is used; for example, `@[Output](../api/core/output)('myClick') clicks = new [EventEmitter](../api/core/eventemitter)<string>();` results in dispatch events with the name "myClick". For more information, see Web Component documentation for [Creating custom events](https://developer.mozilla.org/docs/Web/Guide/Events/Creating_and_triggering_events#Creating_custom_events). Browser support for custom elements ----------------------------------- The recently-developed [custom elements](https://developer.mozilla.org/docs/Web/Web_Components/Using_custom_elements) Web Platform feature is currently supported natively in a number of browsers. | Browser | Custom Element Support | | --- | --- | | Chrome | Supported natively. | | Edge (Chromium-based) | Supported natively. | | Firefox | Supported natively. | | Opera | Supported natively. | | Safari | Supported natively. | To add the `@angular/elements` package to your workspace, run the following command: ``` npm install @angular/elements --save ``` Example: A Popup Service ------------------------ Previously, when you wanted to add a component to an application at runtime, you had to define a *dynamic component*, and then you would have to load it, attach it to an element in the DOM, and wire up all of the dependencies, change detection, and event handling, as described in [Dynamic Component Loader](dynamic-component-loader). Using an Angular custom element makes the process much simpler and more transparent, by providing all of the infrastructure and framework automatically —all you have to do is define the kind of event handling you want. (You do still have to exclude the component from compilation, if you are not going to use it in your application.) The following Popup Service example application defines a component that you can either load dynamically or convert to a custom element. | Files | Details | | --- | --- | | `popup.component.ts` | Defines a simple pop-up element that displays an input message, with some animation and styling. | | `popup.service.ts` | Creates an injectable service that provides two different ways to invoke the `PopupComponent`; as a dynamic component, or as a custom element Notice how much more setup is required for the dynamic-loading method. | | `app.module.ts` | Adds the `PopupComponent` in the module's `declarations` list. | | `app.component.ts` | Defines the application's root component, which uses the `PopupService` to add the pop-up to the DOM at run time. When the application runs, the root component's constructor converts `PopupComponent` to a custom element. | For comparison, the demo shows both methods. One button adds the popup using the dynamic-loading method, and the other uses the custom element. The result is the same; only the preparation is different. 
``` import { Component, EventEmitter, HostBinding, Input, Output } from '@angular/core'; import { animate, state, style, transition, trigger } from '@angular/animations'; @Component({ selector: 'my-popup', template: ` <span>Popup: {{message}}</span> <button type="button" (click)="closed.next()">&#x2716;</button> `, animations: [ trigger('state', [ state('opened', style({transform: 'translateY(0%)'})), state('void, closed', style({transform: 'translateY(100%)', opacity: 0})), transition('* => *', animate('100ms ease-in')), ]) ], styles: [` :host { position: absolute; bottom: 0; left: 0; right: 0; background: #009cff; height: 48px; padding: 16px; display: flex; justify-content: space-between; align-items: center; border-top: 1px solid black; font-size: 24px; } button { border-radius: 50%; } `] }) export class PopupComponent { @HostBinding('@state') state: 'opened' | 'closed' = 'closed'; @Input() get message(): string { return this._message; } set message(message: string) { this._message = message; this.state = 'opened'; } private _message = ''; @Output() closed = new EventEmitter<void>(); } ``` ``` import { ApplicationRef, ComponentFactoryResolver, Injectable, Injector } from '@angular/core'; import { NgElement, WithProperties } from '@angular/elements'; import { PopupComponent } from './popup.component'; @Injectable() export class PopupService { constructor(private injector: Injector, private applicationRef: ApplicationRef, private componentFactoryResolver: ComponentFactoryResolver) {} // Previous dynamic-loading method required you to set up infrastructure // before adding the popup to the DOM. showAsComponent(message: string) { // Create element const popup = document.createElement('popup-component'); // Create the component and wire it up with the element const factory = this.componentFactoryResolver.resolveComponentFactory(PopupComponent); const popupComponentRef = factory.create(this.injector, [], popup); // Attach to the view so that the change detector knows to run this.applicationRef.attachView(popupComponentRef.hostView); // Listen to the close event popupComponentRef.instance.closed.subscribe(() => { document.body.removeChild(popup); this.applicationRef.detachView(popupComponentRef.hostView); }); // Set the message popupComponentRef.instance.message = message; // Add to the DOM document.body.appendChild(popup); } // This uses the new custom-element method to add the popup to the DOM. 
showAsElement(message: string) { // Create element const popupEl: NgElement & WithProperties<PopupComponent> = document.createElement('popup-element') as any; // Listen to the close event popupEl.addEventListener('closed', () => document.body.removeChild(popupEl)); // Set the message popupEl.message = message; // Add to the DOM document.body.appendChild(popupEl); } } ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; import { AppComponent } from './app.component'; import { PopupComponent } from './popup.component'; import { PopupService } from './popup.service'; @NgModule({ imports: [BrowserModule, BrowserAnimationsModule], providers: [PopupService], declarations: [AppComponent, PopupComponent], bootstrap: [AppComponent], }) export class AppModule { } ``` ``` import { Component, Injector } from '@angular/core'; import { createCustomElement } from '@angular/elements'; import { PopupService } from './popup.service'; import { PopupComponent } from './popup.component'; @Component({ selector: 'app-root', template: ` <input #input value="Message"> <button type="button" (click)="popup.showAsComponent(input.value)">Show as component</button> <button type="button" (click)="popup.showAsElement(input.value)">Show as element</button> `, }) export class AppComponent { constructor(injector: Injector, public popup: PopupService) { // Convert `PopupComponent` to a custom element. const PopupElement = createCustomElement(PopupComponent, {injector}); // Register the custom element with the browser. customElements.define('popup-element', PopupElement); } } ``` Typings for custom elements --------------------------- Generic DOM APIs, such as `document.createElement()` or `document.querySelector()`, return an element type that is appropriate for the specified arguments. For example, calling `document.createElement('a')` returns an `HTMLAnchorElement`, which TypeScript knows has an `href` property. Similarly, `document.createElement('div')` returns an `HTMLDivElement`, which TypeScript knows has no `href` property. When called with unknown elements, such as a custom element name (`popup-element` in our example), the methods return a generic type, such as `HTMLElement`, because TypeScript can't infer the correct type of the returned element. Custom elements created with Angular extend `[NgElement](../api/elements/ngelement)` (which in turn extends `HTMLElement`). Additionally, these custom elements will have a property for each input of the corresponding component. For example, our `popup-element` has a `message` property of type `string`. There are a few options if you want to get correct types for your custom elements. Assume you create a `my-dialog` custom element based on the following component: ``` @Component(…) class MyDialog { @Input() content: string; } ``` The most straightforward way to get accurate typings is to cast the return value of the relevant DOM methods to the correct type. For that, use the `[NgElement](../api/elements/ngelement)` and `[WithProperties](../api/elements/withproperties)` types (both exported from `@angular/elements`): ``` const aDialog = document.createElement('my-dialog') as NgElement & WithProperties<{content: string}>; aDialog.content = 'Hello, world!'; aDialog.content = 123; // <-- ERROR: TypeScript knows this should be a string. aDialog.body = 'News'; // <-- ERROR: TypeScript knows there is no `body` property on `aDialog`. 
``` This is a good way to quickly get TypeScript features, such as type checking and autocomplete support, for your custom element. But it can get cumbersome if you need it in several places, because you have to cast the return type on every occurrence. An alternative way, that only requires defining each custom element's type once, is augmenting the `HTMLElementTagNameMap`, which TypeScript uses to infer the type of a returned element based on its tag name (for DOM methods such as `document.createElement()`, `document.querySelector()`, etc.): ``` declare global { interface HTMLElementTagNameMap { 'my-dialog': NgElement & WithProperties<{content: string}>; 'my-other-element': NgElement & WithProperties<{foo: 'bar'}>; … } } ``` Now, TypeScript can infer the correct type the same way it does for built-in elements: ``` document.createElement('div') //--> HTMLDivElement (built-in element) document.querySelector('foo') //--> Element (unknown element) document.createElement('my-dialog') //--> NgElement & WithProperties<{content: string}> (custom element) document.querySelector('my-other-element') //--> NgElement & WithProperties<{foo: 'bar'}> (custom element) ``` Last reviewed on Mon Feb 28 2022 angular Optimizing client application size with lightweight injection tokens Optimizing client application size with lightweight injection tokens ==================================================================== This page provides a conceptual overview of a dependency injection technique that is recommended for library developers. Designing your library with *lightweight injection tokens* helps optimize the bundle size of client applications that use your library. You can manage the dependency structure among your components and injectable services to optimize bundle size by using [tree-shakable providers](architecture-services#introduction-to-services-and-dependency-injection). This normally ensures that if a provided component or service is never actually used by the application, the compiler can remove its code from the bundle. Due to the way Angular stores injection tokens, it is possible that such an unused component or service can end up in the bundle anyway. This page describes a dependency-injection design pattern that supports proper tree-shaking by using lightweight injection tokens. The lightweight injection token design pattern is especially important for library developers. It ensures that when an application uses only some of your library's capabilities, the unused code can be eliminated from the client's application bundle. When an application uses your library, there might be some services that your library supplies which the client application doesn't use. In this case, the application developer should expect that service to be tree-shaken, and not contribute to the size of the compiled application. Because the application developer cannot know about or remedy a tree-shaking problem in the library, it is the responsibility of the library developer to do so. To prevent the retention of unused components, your library should use the lightweight injection token design pattern. When tokens are retained ------------------------ To better explain the condition under which token retention occurs, consider a library that provides a library-card component. This component contains a body and can contain an optional header. 
``` <lib-card> <lib-header>…</lib-header> </lib-card> ``` In a likely implementation, the `<lib-card>` component uses `@[ContentChild](../api/core/contentchild)()` or `@[ContentChildren](../api/core/contentchildren)()` to get `<lib-header>` and `<lib-body>`, as in the following. ``` @Component({ selector: 'lib-header', …, }) class LibHeaderComponent {} @Component({ selector: 'lib-card', …, }) class LibCardComponent { @ContentChild(LibHeaderComponent) header: LibHeaderComponent|null = null; } ``` Because `<lib-header>` is optional, the element can appear in the template in its minimal form, `<lib-card></lib-card>`. In this case, `<lib-header>` is not used and you would expect it to be tree-shaken, but that is not what happens. This is because `LibCardComponent` actually contains two references to the `LibHeaderComponent`. ``` @ContentChild(LibHeaderComponent) header: LibHeaderComponent; ``` * One of these reference is in the *type position*-- that is, it specifies `LibHeaderComponent` as a type: `header: LibHeaderComponent;`. * The other reference is in the *value position*-- that is, LibHeaderComponent is the value of the `@[ContentChild](../api/core/contentchild)()` parameter decorator: `@[ContentChild](../api/core/contentchild)(LibHeaderComponent)`. The compiler handles token references in these positions differently. * The compiler erases *type position* references after conversion from TypeScript, so they have no impact on tree-shaking. * The compiler must keep *value position* references at runtime, which prevents the component from being tree-shaken. In the example, the compiler retains the `LibHeaderComponent` token that occurs in the value position. This prevents the referenced component from being tree-shaken, even if the application developer does not actually use `<lib-header>` anywhere. If `LibHeaderComponent` 's code, template, and styles combined becomes too large, including it unnecessarily can significantly increase the size of the client application. When to use the lightweight injection token pattern --------------------------------------------------- The tree-shaking problem arises when a component is used as an injection token. There are two cases when that can happen. * The token is used in the value position of a [content query](lifecycle-hooks#using-aftercontent-hooks "See more about using content queries."). * The token is used as a type specifier for constructor injection. In the following example, both uses of the `OtherComponent` token cause retention of `OtherComponent`, preventing it from being tree-shaken when it is not used. ``` class MyComponent { constructor(@Optional() other: OtherComponent) {} @ContentChild(OtherComponent) other: OtherComponent|null; } ``` Although tokens used only as type specifiers are removed when converted to JavaScript, all tokens used for dependency injection are needed at runtime. These effectively change `constructor(@[Optional](../api/core/optional)() other: OtherComponent)` to `constructor(@[Optional](../api/core/optional)() @[Inject](../api/core/inject)(OtherComponent) other)`. The token is now in a value position, and causes the tree shaker to keep the reference. > For all services, a library should use [tree-shakable providers](architecture-services#introduction-to-services-and-dependency-injection), providing dependencies at the root level rather than in component constructors. 
> > Using lightweight injection tokens ---------------------------------- The lightweight injection token design pattern consists of using a small abstract class as an injection token, and providing the actual implementation at a later stage. The abstract class is retained, not tree-shaken, but it is small and has no material impact on the application size. The following example shows how this works for the `LibHeaderComponent`. ``` abstract class LibHeaderToken {} @Component({ selector: 'lib-header', providers: [ {provide: LibHeaderToken, useExisting: LibHeaderComponent} ] …, }) class LibHeaderComponent extends LibHeaderToken {} @Component({ selector: 'lib-card', …, }) class LibCardComponent { @ContentChild(LibHeaderToken) header: LibHeaderToken|null = null; } ``` In this example, the `LibCardComponent` implementation no longer refers to `LibHeaderComponent` in either the type position or the value position. This lets full tree shaking of `LibHeaderComponent` take place. The `LibHeaderToken` is retained, but it is only a class declaration, with no concrete implementation. It is small and does not materially impact the application size when retained after compilation. Instead, `LibHeaderComponent` itself implements the abstract `LibHeaderToken` class. You can safely use that token as the provider in the component definition, allowing Angular to correctly inject the concrete type. To summarize, the lightweight injection token pattern consists of the following. 1. A lightweight injection token that is represented as an abstract class. 2. A component definition that implements the abstract class. 3. Injection of the lightweight pattern, using `@[ContentChild](../api/core/contentchild)()` or `@[ContentChildren](../api/core/contentchildren)()`. 4. A provider in the implementation of the lightweight injection token which associates the lightweight injection token with the implementation. ### Use the lightweight injection token for API definition A component that injects a lightweight injection token might need to invoke a method in the injected class. The token is now an abstract class. Since the injectable component implements that class, you must also declare an abstract method in the abstract lightweight injection token class. The implementation of the method, with all its code overhead, resides in the injectable component that can be tree-shaken. This lets the parent communicate with the child, if it is present, in a type-safe manner. For example, the `LibCardComponent` now queries `LibHeaderToken` rather than `LibHeaderComponent`. The following example shows how the pattern lets `LibCardComponent` communicate with the `LibHeaderComponent` without actually referring to `LibHeaderComponent`. ``` abstract class LibHeaderToken { abstract doSomething(): void; } @Component({ selector: 'lib-header', providers: [ {provide: LibHeaderToken, useExisting: LibHeaderComponent} ] …, }) class LibHeaderComponent extends LibHeaderToken { doSomething(): void { // Concrete implementation of `doSomething` } } @Component({ selector: 'lib-card', …, }) class LibCardComponent implements AfterContentInit { @ContentChild(LibHeaderToken) header: LibHeaderToken|null = null; ngAfterContentInit(): void { this.header && this.header.doSomething(); } } ``` In this example the parent queries the token to get the child component, and stores the resulting component reference if it is present. Before calling a method in the child, the parent component checks to see if the child component is present.
If the child component has been tree-shaken, there is no runtime reference to it, and no call to its method. ### Naming your lightweight injection token Lightweight injection tokens are only useful with components. The Angular style guide suggests that you name components using the "Component" suffix. The example "LibHeaderComponent" follows this convention. You should maintain the relationship between the component and its token while still distinguishing between them. The recommended style is to use the component base name with the suffix "`Token`" to name your lightweight injection tokens: "`LibHeaderToken`." Last reviewed on Mon Feb 28 2022
angular Lazy-loading feature modules Lazy-loading feature modules ============================ By default, NgModules are eagerly loaded. This means that as soon as the application loads, so do all the NgModules, whether they are immediately necessary or not. For large applications with lots of routes, consider lazy loading —a design pattern that loads NgModules as needed. Lazy loading helps keep initial bundle sizes smaller, which in turn helps decrease load times. > For the final sample application with two lazy-loaded modules that this page describes, see the live example. > > Lazy loading basics ------------------- This section introduces the basic procedure for configuring a lazy-loaded route. For a step-by-step example, see the [step-by-step setup](lazy-loading-ngmodules#step-by-step) section on this page. To lazy load Angular modules, use `loadChildren` (instead of `component`) in your `AppRoutingModule` `routes` configuration as follows. ``` const routes: Routes = [ { path: 'items', loadChildren: () => import('./items/items.module').then(m => m.ItemsModule) } ]; ``` In the lazy-loaded module's routing module, add a route for the component. ``` const routes: Routes = [ { path: '', component: ItemsComponent } ]; ``` Also be sure to remove the `ItemsModule` from the `AppModule`. For step-by-step instructions on lazy loading modules, continue with the following sections of this page. Step-by-step setup ------------------ Setting up a lazy-loaded feature module requires two main steps: 1. Create the feature module with the Angular CLI, using the `--route` flag. 2. Configure the routes. ### Set up an application If you don't already have an application, follow the following steps to create one with the Angular CLI. If you already have an application, skip to [Configure the routes](lazy-loading-ngmodules#config-routes). Enter the following command where `customer-app` is the name of your app: ``` ng new customer-app --routing ``` This creates an application called `customer-app` and the `--routing` flag generates a file called `app-routing.module.ts`. This is one of the files you need for setting up lazy loading for your feature module. Navigate into the project by issuing the command `cd customer-app`. > The `--routing` option requires Angular CLI version 8.1 or higher. See [Keeping Up to Date](updating). > > ### Create a feature module with routing Next, you need a feature module with a component to route to. To make one, enter the following command in the command line tool, where `customers` is the name of the feature module. The path for loading the `customers` feature modules is also `customers` because it is specified with the `--route` option: ``` ng generate module customers --route customers --module app.module ``` This creates a `customers` directory having the new lazy-loadable feature module `CustomersModule` defined in the `customers.module.ts` file and the routing module `CustomersRoutingModule` defined in the `customers-routing.module.ts` file. The command automatically declares the `CustomersComponent` and imports `CustomersRoutingModule` inside the new feature module. Because the new module is meant to be lazy-loaded, the command does **not** add a reference to it in the application's root module file, `app.module.ts`. Instead, it adds the declared route, `customers` to the `routes` array declared in the module provided as the `--module` option. 
``` const routes: Routes = [ { path: 'customers', loadChildren: () => import('./customers/customers.module').then(m => m.CustomersModule) } ]; ``` Notice that the lazy-loading syntax uses `loadChildren` followed by a function that uses the browser's built-in `import('...')` syntax for dynamic imports. The import path is the relative path to the module. In Angular version 8, the string syntax for the `loadChildren` route specification [was deprecated](deprecations#loadchildren-string-syntax) in favor of the `import()` syntax. You can opt into using string-based lazy loading (`loadChildren: './path/to/module#Module'`) by including the lazy-loaded routes in your `tsconfig` file, which includes the lazy-loaded files in the compilation. By default the Angular CLI generates projects with stricter file inclusions intended to be used with the `import()` syntax. ### Add another feature module Use the same command to create a second lazy-loaded feature module with routing, along with its stub component. ``` ng generate module orders --route orders --module app.module ``` This creates a new directory called `orders` containing the `OrdersModule` and `OrdersRoutingModule`, along with the new `OrdersComponent` source files. The `orders` route, specified with the `--route` option, is added to the `routes` array inside the `app-routing.module.ts` file, using the lazy-loading syntax. ``` const routes: Routes = [ { path: 'customers', loadChildren: () => import('./customers/customers.module').then(m => m.CustomersModule) }, { path: 'orders', loadChildren: () => import('./orders/orders.module').then(m => m.OrdersModule) } ]; ``` ### Set up the UI Though you can type the URL into the address bar, a navigation UI is straightforward for the user and more common. Replace the default placeholder markup in `app.component.html` with a custom nav, so you can navigate to your modules in the browser: ``` <h1> {{title}} </h1> <button type="button" routerLink="/customers">Customers</button> <button type="button" routerLink="/orders">Orders</button> <button type="button" routerLink="">Home</button> <router-outlet></router-outlet> ``` To see your application in the browser so far, enter the following command in the command line tool window: ``` ng serve ``` Then go to `localhost:4200` where you should see "customer-app" and three buttons. These buttons work, because the Angular CLI automatically added the routes to the feature modules to the `routes` array in `app-routing.module.ts`. ### Imports and route configuration The Angular CLI automatically added each feature module to the routes map at the application level. Finish this off by adding the default route. In the `app-routing.module.ts` file, update the `routes` array with the following: ``` const routes: Routes = [ { path: 'customers', loadChildren: () => import('./customers/customers.module').then(m => m.CustomersModule) }, { path: 'orders', loadChildren: () => import('./orders/orders.module').then(m => m.OrdersModule) }, { path: '', redirectTo: '', pathMatch: 'full' } ]; ``` The first two paths are the routes to the `CustomersModule` and the `OrdersModule`. The final entry defines a default route. The empty path matches everything that doesn't match an earlier path. ### Inside the feature module Next, take a look at the `customers.module.ts` file. If you're using the Angular CLI and following the steps outlined in this page, you don't have to do anything here. 
``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { CustomersRoutingModule } from './customers-routing.module'; import { CustomersComponent } from './customers.component'; @NgModule({ imports: [ CommonModule, CustomersRoutingModule ], declarations: [CustomersComponent] }) export class CustomersModule { } ``` The `customers.module.ts` file imports the `customers-routing.module.ts` and `customers.component.ts` files. `CustomersRoutingModule` is listed in the `@[NgModule](../api/core/ngmodule)` `imports` array giving `CustomersModule` access to its own routing module. `CustomersComponent` is in the `declarations` array, which means `CustomersComponent` belongs to the `CustomersModule`. The `app-routing.module.ts` then imports the feature module, `customers.module.ts` using JavaScript's dynamic import. The feature-specific route definition file `customers-routing.module.ts` imports its own feature component defined in the `customers.component.ts` file, along with the other JavaScript import statements. It then maps the empty path to the `CustomersComponent`. ``` import { NgModule } from '@angular/core'; import { Routes, RouterModule } from '@angular/router'; import { CustomersComponent } from './customers.component'; const routes: Routes = [ { path: '', component: CustomersComponent } ]; @NgModule({ imports: [RouterModule.forChild(routes)], exports: [RouterModule] }) export class CustomersRoutingModule { } ``` The `path` here is set to an empty string because the path in `AppRoutingModule` is already set to `customers`, so this route in the `CustomersRoutingModule`, is already within the `customers` context. Every route in this routing module is a child route. The other feature module's routing module is configured similarly. ``` import { OrdersComponent } from './orders.component'; const routes: Routes = [ { path: '', component: OrdersComponent } ]; ``` ### Verify lazy loading You can verify that a module is indeed being lazy loaded with the Chrome developer tools. In Chrome, open the developer tools by pressing `Cmd+Option+i` on a Mac or `Ctrl+Shift+j` on a PC and go to the Network Tab. Click on the Orders or Customers button. If you see a chunk appear, everything is wired up properly and the feature module is being lazy loaded. A chunk should appear for Orders and for Customers but only appears once for each. To see it again, or to test after making changes, click the circle with a line through it in the upper left of the Network Tab: Then reload with `Cmd+r` or `Ctrl+r`, depending on your platform. `forRoot()` and `forChild()` ----------------------------- You might have noticed that the Angular CLI adds `RouterModule.forRoot(routes)` to the `AppRoutingModule` `imports` array. This lets Angular know that the `AppRoutingModule` is a routing module and `forRoot()` specifies that this is the root routing module. It configures all the routes you pass to it, gives you access to the router directives, and registers the `[Router](../api/router/router)` service. Use `forRoot()` only once in the application, inside the `AppRoutingModule`. The Angular CLI also adds `RouterModule.forChild(routes)` to feature routing modules. This way, Angular knows that the route list is only responsible for providing extra routes and is intended for feature modules. You can use `forChild()` in multiple modules. The `forRoot()` method takes care of the *global* injector configuration for the Router. The `forChild()` method has no injector configuration. 
It uses directives such as `[RouterOutlet](../api/router/routeroutlet)` and `[RouterLink](../api/router/routerlink)`. For more information, see the [`forRoot()` pattern](singleton-services#forRoot) section of the [Singleton Services](singleton-services) guide. Preloading ---------- Preloading improves UX by loading parts of your application in the background. You can preload modules or component data. ### Preloading modules Preloading modules improves UX by loading parts of your application in the background. By doing this, users don't have to wait for the elements to download when they activate a route. To enable preloading of all lazy loaded modules, import the `[PreloadAllModules](../api/router/preloadallmodules)` token from the Angular `router`. ``` import { PreloadAllModules } from '@angular/router'; ``` Still in the `AppRoutingModule`, specify your preloading strategy in `forRoot()`. ``` RouterModule.forRoot( appRoutes, { preloadingStrategy: PreloadAllModules } ) ``` ### Preloading component data To preload component data, use a `resolver`. Resolvers improve UX by blocking the page load until all necessary data is available to fully display the page. #### Resolvers Create a resolver service. With the Angular CLI, the command to create a service is as follows: ``` ng generate service <service-name> ``` In the newly created service, implement the `[Resolve](../api/router/resolve)` interface provided by the `@angular/router` package: ``` import { Resolve } from '@angular/router'; … /* An interface that represents your data model */ export interface Crisis { id: number; name: string; } export class CrisisDetailResolverService implements Resolve<Crisis> { resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<Crisis> { // your logic goes here } } ``` Import this resolver into your module's routing module. ``` import { CrisisDetailResolverService } from './crisis-detail-resolver.service'; ``` Add a `resolve` object to the component's `route` configuration. ``` { path: '/your-path', component: YourComponent, resolve: { crisis: CrisisDetailResolverService } } ``` In the component's constructor, inject an instance of the `[ActivatedRoute](../api/router/activatedroute)` class that represents the current route. ``` import { ActivatedRoute } from '@angular/router'; @Component({ … }) class YourComponent { constructor(private route: ActivatedRoute) {} } ``` Use the injected instance of the `[ActivatedRoute](../api/router/activatedroute)` class to access `data` associated with a given route. ``` import { ActivatedRoute } from '@angular/router'; @Component({ … }) class YourComponent { constructor(private route: ActivatedRoute) {} ngOnInit() { this.route.data .subscribe(data => { const crisis: Crisis = data.crisis; // … }); } } ``` For more information with a working example, see the [routing tutorial section on preloading](router-tutorial-toh#preloading-background-loading-of-feature-areas). Troubleshooting lazy-loading modules ------------------------------------ A common error when lazy-loading modules is importing common modules in multiple places within an application. Test for this condition by first generating the module using the Angular CLI and including the `--route route-name` parameter, where `route-name` is the name of your module. Next, create the module without the `--route` parameter. If `ng generate module` with the `--route` parameter returns an error, but runs correctly without it, you might have imported the same module in multiple places.
Remember, many common Angular modules should be imported at the base of your application. For more information on Angular Modules, see [NgModules](ngmodules). More on NgModules and routing ----------------------------- You might also be interested in the following: * [Routing and Navigation](router) * [Providers](providers) * [Types of Feature Modules](module-types) * [Route-level code-splitting in Angular](https://web.dev/route-level-code-splitting-in-angular) * [Route preloading strategies in Angular](https://web.dev/route-preloading-in-angular) Last reviewed on Mon Feb 28 2022 angular Debugging tests Debugging tests =============== If your tests aren't working as you expect them to, you can inspect and debug them in the browser. Debug specs in the browser in the same way that you debug an application. 1. Reveal the Karma browser window. See [Set up testing](testing#set-up-testing) if you need help with this step. 2. Click the **DEBUG** button to open a new browser tab and re-run the tests. 3. Open the browser's **Developer Tools**. On Windows, press `Ctrl-Shift-I`. On macOS, press `Command-Option-I`. 4. Pick the **Sources** section. 5. Press `Control/Command-P`, and then start typing the name of your test file to open it. 6. Set a breakpoint in the test. 7. Refresh the browser, and notice how it stops at the breakpoint. Last reviewed on Mon Feb 28 2022 angular Resolving zone pollution Resolving zone pollution ======================== **Zone.js** is a signaling mechanism that Angular uses to detect when an application state might have changed. It captures asynchronous operations like `setTimeout`, network requests, and event listeners. Angular schedules change detection based on signals from Zone.js. In some cases scheduled [tasks](https://developer.mozilla.org/en-US/docs/Web/API/HTML_DOM_API/Microtask_guide#tasks) or [microtasks](https://developer.mozilla.org/en-US/docs/Web/API/HTML_DOM_API/Microtask_guide#microtasks) don’t make any changes in the data model, which makes running change detection unnecessary. Common examples are: * `requestAnimationFrame`, `setTimeout` or `setInterval` * Task or microtask scheduling by third-party libraries This section covers how to identify such conditions, and how to run code outside the Angular zone to avoid unnecessary change detection calls. Identifying unnecessary change detection calls ---------------------------------------------- You can detect unnecessary change detection calls using Angular DevTools. Often they appear as consecutive bars in the profiler’s timeline with source `setTimeout`, `setInterval`, `requestAnimationFrame`, or an event handler. When your application itself makes few calls to these APIs, such change detection invocations are usually caused by a third-party library. In such a recording, you might see a series of change detection calls triggered by event handlers associated with an element. That’s a common challenge when using third-party, non-native Angular components, which do not alter the default behavior of `[NgZone](../api/core/ngzone)`. Run tasks outside `[NgZone](../api/core/ngzone)` ------------------------------------------------ In such cases, you can instruct Angular to avoid calling change detection for tasks scheduled by a given piece of code using [NgZone](zone). ``` import { Component, NgZone, OnInit } from '@angular/core'; @Component(...)
class AppComponent implements OnInit { constructor(private ngZone: NgZone) {} ngOnInit() { this.ngZone.runOutsideAngular(() => setInterval(pollForUpdates, 500)); } } ``` The preceding snippet instructs Angular to call `setInterval` outside the Angular Zone and skip running change detection after `pollForUpdates` runs. Third-party libraries commonly trigger unnecessary change detection cycles because they weren't authored with Zone.js in mind. Avoid these extra cycles by calling library APIs outside the Angular zone: ``` import { Component, NgZone, OnInit } from '@angular/core'; import * as Plotly from 'plotly.js-dist-min'; @Component(...) class AppComponent implements OnInit { constructor(private ngZone: NgZone) {} ngOnInit() { this.ngZone.runOutsideAngular(() => { Plotly.newPlot('chart', data); }); } } ``` Running `Plotly.newPlot('chart', data);` within `runOutsideAngular` instructs the framework that it shouldn’t run change detection after the execution of tasks scheduled by the initialization logic. For example, if `Plotly.newPlot('chart', data)` adds event listeners to a DOM element, Angular does not run change detection after the execution of their handlers. Last reviewed on Wed May 04 2022 angular Service worker in production Service worker in production ============================ This page is a reference for deploying and supporting production applications that use the Angular service worker. It explains how the Angular service worker fits into the larger production environment, the service worker's behavior under various conditions, and available resources and fail-safes. Prerequisites ------------- A basic understanding of the following: * [Service Worker Communication](service-worker-communications) Service worker and caching of application resources --------------------------------------------------- Imagine the Angular service worker as a forward cache or a Content Delivery Network (CDN) edge that is installed in the end user's web browser. The service worker responds to requests made by the Angular application for resources or data from a local cache, without needing to wait for the network. Like any cache, it has rules for how content is expired and updated. ### Application versions In the context of an Angular service worker, a "version" is a collection of resources that represent a specific build of the Angular application. Whenever a new build of the application is deployed, the service worker treats that build as a new version of the application. This is true even if only a single file is updated. At any given time, the service worker might have multiple versions of the application in its cache and it might be serving them simultaneously. For more information, see the [Application tabs](service-worker-devops#tabs) section. To preserve application integrity, the Angular service worker groups all files into a version together. The files grouped into a version usually include HTML, JS, and CSS files. Grouping of these files is essential for integrity because HTML, JS, and CSS files frequently refer to each other and depend on specific content. For example, an `index.html` file might have a `<script>` tag that references `bundle.js` and it might attempt to call a function `startApp()` from within that script. Any time this version of `index.html` is served, the corresponding `bundle.js` must be served with it. For example, assume that the `startApp()` function is renamed to `runApp()` in both files.
In this scenario, it is not valid to serve the old `index.html`, which calls `startApp()`, along with the new bundle, which defines `runApp()`. This file integrity is especially important when lazy loading modules. A JS bundle might reference many lazy chunks, and the filenames of the lazy chunks are unique to the particular build of the application. If a running application at version `X` attempts to load a lazy chunk, but the server has already updated to version `X + 1`, the lazy loading operation fails. The version identifier of the application is determined by the contents of all resources, and it changes if any of them change. In practice, the version is determined by the contents of the `ngsw.json` file, which includes hashes for all known content. If any of the cached files change, the file's hash changes in `ngsw.json`. This change causes the Angular service worker to treat the active set of files as a new version. > The build process creates the manifest file, `ngsw.json`, using information from `ngsw-config.json`. > > With the versioning behavior of the Angular service worker, an application server can ensure that the Angular application always has a consistent set of files. #### Update checks Every time the user opens or refreshes the application, the Angular service worker checks for updates to the application by looking for updates to the `ngsw.json` manifest. If an update is found, it is downloaded and cached automatically, and is served the next time the application is loaded. ### Resource integrity One of the potential side effects of long caching is inadvertently caching a resource that's not valid. In a normal HTTP cache, a hard refresh or the cache expiring limits the negative effects of caching a file that's not valid. A service worker ignores such constraints and effectively long-caches the entire application. It's important that the service worker gets the correct content, so it keeps hashes of the resources to maintain their integrity. #### Hashed content To ensure resource integrity, the Angular service worker validates the hashes of all resources for which it has a hash. For an application created with the [Angular CLI](cli), this is everything in the `dist` directory covered by the user's `src/ngsw-config.json` configuration. If a particular file fails validation, the Angular service worker attempts to re-fetch the content using a "cache-busting" URL parameter to prevent browser or intermediate caching. If that content also fails validation, the service worker considers the entire version of the application to not be valid and stops serving the application. If necessary, the service worker enters a safe mode where requests fall back on the network. The service worker doesn't use its cache if there's a high risk of serving content that is broken, outdated, or not valid. Hash mismatches can occur for a variety of reasons: * Caching layers between the origin server and the end user could serve stale content * A non-atomic deployment could result in the Angular service worker having visibility of partially updated content * Errors during the build process could result in updated resources without `ngsw.json` being updated. The reverse could also happen resulting in an updated `ngsw.json` without updated resources. #### Unhashed content The only resources that have hashes in the `ngsw.json` manifest are resources that were present in the `dist` directory at the time the manifest was built. 
Other resources, especially those loaded from CDNs, have content that is unknown at build time or are updated more frequently than the application is deployed. If the Angular service worker does not have a hash to verify a resource is valid, it still caches its contents. At the same time, it honors the HTTP caching headers by using a policy of *stale while revalidate*. The Angular service worker continues to serve a resource even after its HTTP caching headers indicate that it is no longer valid. At the same time, it attempts to refresh the expired resource in the background. This way, broken unhashed resources do not remain in the cache beyond their configured lifetimes. ### Application tabs It can be problematic for an application if the version of resources it's receiving changes suddenly or without warning. See the [Application versions](service-worker-devops#versions) section for a description of such issues. The Angular service worker provides a guarantee: a running application continues to run the same version of the application. If another instance of the application is opened in a new web browser tab, then the most current version of the application is served. As a result, that new tab can be running a different version of the application than the original tab. > **IMPORTANT**: This guarantee is **stronger** than that provided by the normal web deployment model. Without a service worker, there is no guarantee that lazily loaded code is from the same version as the application's initial code. > > The Angular service worker might change the version of a running application under error conditions such as: * The current version becomes non-valid due to a failed hash * An unrelated error causes the service worker to enter safe mode and deactivates it temporarily The Angular service worker cleans up application versions when no tab is using them. Other reasons the Angular service worker might change the version of a running application are normal events: * The page is reloaded/refreshed * The page requests an update be immediately activated using the `[SwUpdate](../api/service-worker/swupdate)` service ### Service worker updates The Angular service worker is a small script that runs in web browsers. From time to time, the service worker is updated with bug fixes and feature improvements. The Angular service worker is downloaded when the application is first opened and when the application is accessed after a period of inactivity. If the service worker changes, it's updated in the background. Most updates to the Angular service worker are transparent to the application. The old caches are still valid and content is still served normally. Occasionally, a bug fix or feature in the Angular service worker might require the invalidation of old caches. In this case, the service worker transparently refreshes the application from the network. ### Bypassing the service worker In some cases, you might want to bypass the service worker entirely and let the browser handle the request. An example is when you rely on a feature that is currently not supported in service workers, such as [reporting progress on uploaded files](https://github.com/w3c/ServiceWorker/issues/1141). To bypass the service worker, set `ngsw-bypass` as a request header, or as a query parameter. The value of the header or query parameter is ignored and can be empty or omitted. 
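For illustration, here is a minimal sketch of what this can look like from an Angular application. The `UploadService` name and the `/api/upload` endpoint are hypothetical; only the `ngsw-bypass` header itself comes from the Angular service worker.

```
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({ providedIn: 'root' })
export class UploadService {
  constructor(private http: HttpClient) {}

  upload(file: File) {
    const body = new FormData();
    body.append('file', file);
    // The `ngsw-bypass` header tells the Angular service worker to ignore this
    // request and let the browser handle it directly. Its value is ignored.
    return this.http.post('/api/upload', body, {
      headers: { 'ngsw-bypass': 'true' },
      reportProgress: true,
      observe: 'events',
    });
  }
}
```

Appending `ngsw-bypass` as a query parameter, for example `/api/upload?ngsw-bypass=true`, has the same effect.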
### Service worker requests when the server can't be reached The service worker processes all requests unless the [service worker is explicitly bypassed](service-worker-devops#bypassing-the-service-worker). The service worker either returns a cached response or sends the request to the server, depending on the state and configuration of the cache. The service worker only caches responses to non-mutating requests, such as `GET` and `HEAD`. If the service worker receives an error from the server or it doesn't receive a response, it returns an error status that indicates the result of the call. For example, if the service worker doesn't receive a response, it creates a [504 Gateway Timeout](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504) status to return. The `504` status in this example could be returned because the server is offline or the client is disconnected. Debugging the Angular service worker ------------------------------------ Occasionally, it might be necessary to examine the Angular service worker in a running state to investigate issues or whether it's operating as designed. Browsers provide built-in tools for debugging service workers and the Angular service worker itself includes useful debugging features. ### Locating and analyzing debugging information The Angular service worker exposes debugging information under the `ngsw/` virtual directory. Currently, the single exposed URL is `ngsw/state`. Here is an example of this debug page's contents: ``` NGSW Debug Info: Driver version: 13.3.7 Driver state: NORMAL ((nominal)) Latest manifest hash: eea7f5f464f90789b621170af5a569d6be077e5c Last update check: never === Version eea7f5f464f90789b621170af5a569d6be077e5c === Clients: 7b79a015-69af-4d3d-9ae6-95ba90c79486, 5bc08295-aaf2-42f3-a4cc-9e4ef9100f65 === Idle Task Queue === Last update tick: 1s496u Last update run: never Task queue: * init post-load (update, cleanup) Debug log: ``` #### Driver state The first line indicates the driver state: ``` Driver state: NORMAL ((nominal)) ``` `NORMAL` indicates that the service worker is operating normally and is not in a degraded state. There are two possible degraded states: | Degraded states | Details | | --- | --- | | `EXISTING_CLIENTS_ONLY` | The service worker does not have a clean copy of the latest known version of the application. Older cached versions are safe to use, so existing tabs continue to run from cache, but new loads of the application will be served from the network. The service worker will try to recover from this state when a new version of the application is detected and installed. This happens when a new `ngsw.json` is available. | | `SAFE_MODE` | The service worker cannot guarantee the safety of using cached data. Either an unexpected error occurred or all cached versions are invalid. All traffic will be served from the network, running as little service worker code as possible. | In both cases, the parenthetical annotation provides the error that caused the service worker to enter the degraded state. Both states are temporary; they are saved only for the lifetime of the [ServiceWorker instance](https://developer.mozilla.org/docs/Web/API/ServiceWorkerGlobalScope). The browser sometimes terminates an idle service worker to conserve memory and processor power, and creates a new service worker instance in response to network events. The new instance starts in the `NORMAL` mode, regardless of the state of the previous instance. 
#### Latest manifest hash ``` Latest manifest hash: eea7f5f464f90789b621170af5a569d6be077e5c ``` This is the SHA1 hash of the most up-to-date version of the application that the service worker knows about. #### Last update check ``` Last update check: never ``` This indicates the last time the service worker checked for a new version, or update, of the application. `never` indicates that the service worker has never checked for an update. In this example debug file, the update check is currently scheduled, as explained in the next section. #### Version ``` === Version eea7f5f464f90789b621170af5a569d6be077e5c === Clients: 7b79a015-69af-4d3d-9ae6-95ba90c79486, 5bc08295-aaf2-42f3-a4cc-9e4ef9100f65 ``` In this example, the service worker has one version of the application cached and being used to serve two different tabs. > **NOTE**: This version hash is the "latest manifest hash" listed above. Both clients are on the latest version. Each client is listed by its ID from the `Clients` API in the browser. > > #### Idle task queue ``` === Idle Task Queue === Last update tick: 1s496u Last update run: never Task queue: * init post-load (update, cleanup) ``` The Idle Task Queue is the queue of all pending tasks that happen in the background in the service worker. If there are any tasks in the queue, they are listed with a description. In this example, the service worker has one such task scheduled, a post-initialization operation involving an update check and cleanup of stale caches. The last update tick/run counters give the time since specific events related to the idle queue happened. The "Last update run" counter shows the last time idle tasks were actually executed. "Last update tick" shows the time since the last event after which the queue might be processed. #### Debug log ``` Debug log: ``` Errors that occur within the service worker are logged here. ### Developer tools Browsers such as Chrome provide developer tools for interacting with service workers. Such tools can be powerful when used properly, but there are a few things to keep in mind. * When using developer tools, the service worker is kept running in the background and never restarts. This can cause behavior with Dev Tools open to differ from behavior a user might experience. * If you look in the Cache Storage viewer, the cache is frequently out of date. Right-click the Cache Storage title and refresh the caches. * Stopping and starting the service worker in the Service Worker pane checks for updates. Service worker safety --------------------- Bugs or broken configurations could cause the Angular service worker to act in unexpected ways. If this happens, the Angular service worker contains several failsafe mechanisms in case an administrator needs to deactivate the service worker quickly. ### Fail-safe To deactivate the service worker, rename the `ngsw.json` file or delete it. When the service worker's request for `ngsw.json` returns a `404`, then the service worker removes all its caches and de-registers itself, essentially self-destructing. ### Safety worker A small script, `safety-worker.js`, is also included in the `@angular/service-worker` NPM package. When loaded, it un-registers itself from the browser and removes the service worker caches. This script can be used as a last resort to get rid of unwanted service workers already installed on client pages. > **IMPORTANT**: You cannot register this worker directly, as old clients with cached state might not see a new `index.html` which installs the different worker script.
> > Instead, you must serve the contents of `safety-worker.js` at the URL of the Service Worker script you are trying to unregister. You must continue to do so until you are certain all users have successfully unregistered the old worker. For most sites, this means that you should serve the safety worker at the old Service Worker URL forever. This script can be used to deactivate `@angular/service-worker` and remove the corresponding caches. It also removes any other Service Workers which might have been served in the past on your site. ### Changing your application's location > **IMPORTANT**: Service workers don't work behind a redirect. You might have already encountered the error `The script resource is behind a redirect, which is disallowed`. > > This can be a problem if you have to change your application's location. If you set up a redirect from the old location, such as `example.com`, to the new location, `www.example.com` in this example, the worker stops working. Also, the redirect won't even trigger for users who are loading the site entirely from the service worker. The old worker, which was registered at `example.com`, tries to update and sends a request to the old location `example.com`. This request is redirected to the new location `www.example.com` and creates the error: `The script resource is behind a redirect, which is disallowed`. To remedy this, you might need to deactivate the old worker using one of the preceding techniques: [Fail-safe](service-worker-devops#fail-safe) or [Safety Worker](service-worker-devops#safety-worker). More on Angular service workers ------------------------------- You might also be interested in the following: * [Service Worker Configuration](service-worker-config) Last reviewed on Mon Feb 28 2022
angular Communicating with backend services using HTTP Communicating with backend services using HTTP ============================================== Most front-end applications need to communicate with a server over the HTTP protocol, to download or upload data and access other back-end services. Angular provides a client HTTP API for Angular applications, the `[HttpClient](../api/common/http/httpclient)` service class in `@angular/common/[http](../api/common/http)`. The HTTP client service offers the following major features. * The ability to request [typed response objects](http#typed-response) * Streamlined [error handling](http#error-handling) * [Testability](http#testing-requests) features * Request and response [interception](http#intercepting-requests-and-responses) Prerequisites ------------- Before working with the `[HttpClientModule](../api/common/http/httpclientmodule)`, you should have a basic understanding of the following: * TypeScript programming * Usage of the HTTP protocol * Angular application-design fundamentals, as described in [Angular Concepts](architecture) * Observable techniques and operators. See the [Observables](observables) guide. Setup for server communication ------------------------------ Before you can use `[HttpClient](../api/common/http/httpclient)`, you need to import the Angular `[HttpClientModule](../api/common/http/httpclientmodule)`. Most apps do so in the root `AppModule`. ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { HttpClientModule } from '@angular/common/http'; @NgModule({ imports: [ BrowserModule, // import HttpClientModule after BrowserModule. HttpClientModule, ], declarations: [ AppComponent, ], bootstrap: [ AppComponent ] }) export class AppModule {} ``` You can then inject the `[HttpClient](../api/common/http/httpclient)` service as a dependency of an application class, as shown in the following `ConfigService` example. ``` import { Injectable } from '@angular/core'; import { HttpClient } from '@angular/common/http'; @Injectable() export class ConfigService { constructor(private http: HttpClient) { } } ``` The `[HttpClient](../api/common/http/httpclient)` service makes use of [observables](glossary#observable "Observable definition") for all transactions. You must import the RxJS observable and operator symbols that appear in the example snippets. These `ConfigService` imports are typical. ``` import { Observable, throwError } from 'rxjs'; import { catchError, retry } from 'rxjs/operators'; ``` > You can run the live example that accompanies this guide. > > The sample app does not require a data server. It relies on the [Angular *in-memory-web-api*](https://github.com/angular/angular/tree/main/packages/misc/angular-in-memory-web-api), which replaces the *HttpClient* module's `[HttpBackend](../api/common/http/httpbackend)`. The replacement service simulates the behavior of a REST-like backend. > > Look at the `AppModule` *imports* to see how it is configured. > > Requesting data from a server ----------------------------- Use the [`HttpClient.get()`](../api/common/http/httpclient#get) method to fetch data from a server. The asynchronous method sends an HTTP request, and returns an Observable that emits the requested data when the response is received. The return type varies based on the `observe` and `responseType` values that you pass to the call. 
The `get()` method takes two arguments; the endpoint URL from which to fetch, and an *options* object that is used to configure the request. ``` options: { headers?: HttpHeaders | {[header: string]: string | string[]}, observe?: 'body' | 'events' | 'response', params?: HttpParams|{[param: string]: string | number | boolean | ReadonlyArray<string | number | boolean>}, reportProgress?: boolean, responseType?: 'arraybuffer'|'blob'|'json'|'text', withCredentials?: boolean, } ``` Important options include the *observe* and *responseType* properties. * The *observe* option specifies how much of the response to return * The *responseType* option specifies the format in which to return data > Use the `options` object to configure various other aspects of an outgoing request. In [Adding headers](http#adding-headers), for example, the service set the default headers using the `headers` option property. > > Use the `params` property to configure a request with [HTTP URL parameters](http#url-params), and the `reportProgress` option to [listen for progress events](http#report-progress) when transferring large amounts of data. > > Applications often request JSON data from a server. In the `ConfigService` example, the app needs a configuration file on the server, `config.json`, that specifies resource URLs. ``` { "heroesUrl": "api/heroes", "textfile": "assets/textfile.txt", "date": "2020-01-29" } ``` To fetch this kind of data, the `get()` call needs the following options: `{observe: 'body', responseType: 'json'}`. These are the default values for those options, so the following examples do not pass the options object. Later sections show some of the additional option possibilities. The example conforms to the best practices for creating scalable solutions by defining a re-usable [injectable service](glossary#service "service definition") to perform the data-handling functionality. In addition to fetching data, the service can post-process the data, add error handling, and add retry logic. The `ConfigService` fetches this file using the `[HttpClient.get()](../api/common/http/httpclient#get)` method. ``` configUrl = 'assets/config.json'; getConfig() { return this.http.get<Config>(this.configUrl); } ``` The `ConfigComponent` injects the `ConfigService` and calls the `getConfig` service method. Because the service method returns an `Observable` of configuration data, the component *subscribes* to the method's return value. The subscription callback performs minimal post-processing. It copies the data fields into the component's `config` object, which is data-bound in the component template for display. ``` showConfig() { this.configService.getConfig() .subscribe((data: Config) => this.config = { heroesUrl: data.heroesUrl, textfile: data.textfile, date: data.date, }); } ``` ### Starting the request For all `[HttpClient](../api/common/http/httpclient)` methods, the method doesn't begin its HTTP request until you call `subscribe()` on the observable the method returns. This is true for *all* `[HttpClient](../api/common/http/httpclient)` *methods*. > You should always unsubscribe from an observable when a component is destroyed. > > All observables returned from `[HttpClient](../api/common/http/httpclient)` methods are *cold* by design. Execution of the HTTP request is *deferred*, letting you extend the observable with additional operations such as `tap` and `catchError` before anything actually happens. 
Calling `subscribe()` triggers execution of the observable and causes `[HttpClient](../api/common/http/httpclient)` to compose and send the HTTP request to the server. Think of these observables as *blueprints* for actual HTTP requests. > In fact, each `subscribe()` initiates a separate, independent execution of the observable. Subscribing twice results in two HTTP requests. > > > ``` > const req = http.get<Heroes>('/api/heroes'); > // 0 requests made - .subscribe() not called. > req.subscribe(); > // 1 request made. > req.subscribe(); > // 2 requests made. > ``` > ### Requesting a typed response Structure your `[HttpClient](../api/common/http/httpclient)` request to declare the type of the response object, to make consuming the output easier and more obvious. Specifying the response type acts as a type assertion at compile time. > Specifying the response type is a declaration to TypeScript that it should treat your response as being of the given type. This is a build-time check and doesn't guarantee that the server actually responds with an object of this type. It is up to the server to ensure that the type specified by the server API is returned. > > To specify the response object type, first define an interface with the required properties. Use an interface rather than a class, because the response is a plain object that cannot be automatically converted to an instance of a class. ``` export interface Config { heroesUrl: string; textfile: string; date: any; } ``` Next, specify that interface as the `[HttpClient.get()](../api/common/http/httpclient#get)` call's type parameter in the service. ``` getConfig() { // now returns an Observable of Config return this.http.get<Config>(this.configUrl); } ``` > When you pass an interface as a type parameter to the `[HttpClient.get()](../api/common/http/httpclient#get)` method, use the [RxJS `map` operator](rx-library#operators) to transform the response data as needed by the UI. You can then pass the transformed data to the [async pipe](../api/common/asyncpipe). > > The callback in the updated component method receives a typed data object, which is easier and safer to consume: ``` config: Config | undefined; showConfig() { this.configService.getConfig() // clone the data object, using its known Config shape .subscribe((data: Config) => this.config = { ...data }); } ``` To access properties that are defined in an interface, you must explicitly convert the plain object you get from the JSON to the required response type. For example, the following `subscribe` callback receives `data` as an Object, and then type-casts it in order to access the properties. ``` .subscribe(data => this.config = { heroesUrl: (data as any).heroesUrl, textfile: (data as any).textfile, }); ``` The types of the `observe` and `response` options are *string unions*, rather than plain strings. ``` options: { … observe?: 'body' | 'events' | 'response', … responseType?: 'arraybuffer'|'blob'|'json'|'text', … } ``` This can cause confusion. For example: ``` // this works client.get('/foo', {responseType: 'text'}) // but this does NOT work const options = { responseType: 'text', }; client.get('/foo', options) ``` In the second case, TypeScript infers the type of `options` to be `{responseType: string}`. The type is too wide to pass to `HttpClient.get` which is expecting the type of `responseType` to be one of the *specific* strings. 
`[HttpClient](../api/common/http/httpclient)` is typed explicitly this way so that the compiler can report the correct return type based on the options you provided. Use `as const` to let TypeScript know that you really do mean to use a constant string type: ``` const options = { responseType: 'text' as const, }; client.get('/foo', options); ``` ### Reading the full response In the previous example, the call to `[HttpClient.get()](../api/common/http/httpclient#get)` did not specify any options. By default, it returned the JSON data contained in the response body. You might need more information about the transaction than is contained in the response body. Sometimes servers return special headers or status codes to indicate certain conditions that are important to the application workflow. Tell `[HttpClient](../api/common/http/httpclient)` that you want the full response with the `observe` option of the `get()` method: ``` getConfigResponse(): Observable<HttpResponse<Config>> { return this.http.get<Config>( this.configUrl, { observe: 'response' }); } ``` Now `[HttpClient.get()](../api/common/http/httpclient#get)` returns an `Observable` of type `[HttpResponse](../api/common/http/httpresponse)` rather than just the JSON data contained in the body. The component's `showConfigResponse()` method displays the response headers as well as the configuration: ``` showConfigResponse() { this.configService.getConfigResponse() // resp is of type `HttpResponse<Config>` .subscribe(resp => { // display its headers const keys = resp.headers.keys(); this.headers = keys.map(key => `${key}: ${resp.headers.get(key)}`); // access the body directly, which is typed as `Config`. this.config = { ...resp.body! }; }); } ``` As you can see, the response object has a `body` property of the correct type. ### Making a JSONP request Apps can use the `[HttpClient](../api/common/http/httpclient)` to make [JSONP](https://en.wikipedia.org/wiki/JSONP) requests across domains when a server doesn't support [CORS protocol](https://developer.mozilla.org/docs/Web/HTTP/CORS). Angular JSONP requests return an `Observable`. Follow the pattern for subscribing to observables and use the RxJS `map` operator to transform the response before using the [async pipe](../api/common/asyncpipe) to manage the results. In Angular, use JSONP by including `[HttpClientJsonpModule](../api/common/http/httpclientjsonpmodule)` in the `[NgModule](../api/core/ngmodule)` imports. In the following example, the `searchHeroes()` method uses a JSONP request to query for heroes whose names contain the search term. ``` /* GET heroes whose name contains search term */ searchHeroes(term: string): Observable<Hero[]> { term = term.trim(); const heroesUrl = `${this.heroesUrl}?${term}`; return this.http.jsonp<Hero[]>(heroesUrl, 'callback').pipe( catchError(this.handleError<Hero[]>('searchHeroes', [])) // then handle the error ); } ``` This request passes the `heroesUrl` as the first parameter and the callback function name as the second parameter. The response is wrapped in the callback function. The observable returned by the JSONP request is piped through to the error handler. ### Requesting non-JSON data Not all APIs return JSON data. In this next example, a `DownloaderService` method reads a text file from the server and logs the file contents, before returning those contents to the caller as an `Observable<string>`. ``` getTextFile(filename: string) { // The Observable returned by get() is of type Observable<string> // because a text response was specified.
// There's no need to pass a <string> type parameter to get(). return this.http.get(filename, {responseType: 'text'}) .pipe( tap( // Log the result or error { next: (data) => this.log(filename, data), error: (error) => this.logError(filename, error) } ) ); } ``` `[HttpClient.get()](../api/common/http/httpclient#get)` returns a string rather than the default JSON because of the `responseType` option. The RxJS `tap` operator lets the code inspect both success and error values passing through the observable without disturbing them. A `download()` method in the `DownloaderComponent` initiates the request by subscribing to the service method. ``` download() { this.downloaderService.getTextFile('assets/textfile.txt') .subscribe(results => this.contents = results); } ``` Handling request errors ----------------------- If the request fails on the server, `[HttpClient](../api/common/http/httpclient)` returns an *error* object instead of a successful response. The same service that performs your server transactions should also perform error inspection, interpretation, and resolution. When an error occurs, you can obtain details of what failed in order to inform your user. In some cases, you might also automatically [retry the request](http#retry). ### Getting error details An app should give the user useful feedback when data access fails. A raw error object is not particularly useful as feedback. In addition to detecting that an error has occurred, you need to get error details and use those details to compose a user-friendly response. Two types of errors can occur. * The server backend might reject the request, returning an HTTP response with a status code such as 404 or 500. These are error *responses*. * Something could go wrong on the client-side such as a network error that prevents the request from completing successfully or an exception thrown in an RxJS operator. These errors have `status` set to `0` and the `error` property contains a `ProgressEvent` object, whose `type` might provide further information. `[HttpClient](../api/common/http/httpclient)` captures both kinds of errors in its `[HttpErrorResponse](../api/common/http/httperrorresponse)`. Inspect that response to identify the error's cause. The following example defines an error handler in the previously defined [ConfigService](http#config-service "ConfigService defined"). ``` private handleError(error: HttpErrorResponse) { if (error.status === 0) { // A client-side or network error occurred. Handle it accordingly. console.error('An error occurred:', error.error); } else { // The backend returned an unsuccessful response code. // The response body may contain clues as to what went wrong. console.error( `Backend returned code ${error.status}, body was: `, error.error); } // Return an observable with a user-facing error message. return throwError(() => new Error('Something bad happened; please try again later.')); } ``` The handler returns an RxJS `ErrorObservable` with a user-friendly error message. The following code updates the `getConfig()` method, using a [pipe](pipes "Pipes guide") to send all observables returned by the `[HttpClient.get()](../api/common/http/httpclient#get)` call to the error handler. ``` getConfig() { return this.http.get<Config>(this.configUrl) .pipe( catchError(this.handleError) ); } ``` ### Retrying a failed request Sometimes the error is transient and goes away automatically if you try again. 
For example, network interruptions are common in mobile scenarios, and trying again can produce a successful result. The [RxJS library](rx-library) offers several *retry* operators. For example, the `retry()` operator automatically re-subscribes to a failed `Observable` a specified number of times. *Re-subscribing* to the result of an `[HttpClient](../api/common/http/httpclient)` method call has the effect of reissuing the HTTP request. The following example shows how to pipe a failed request to the `retry()` operator before passing it to the error handler. ``` getConfig() { return this.http.get<Config>(this.configUrl) .pipe( retry(3), // retry a failed request up to 3 times catchError(this.handleError) // then handle the error ); } ``` Sending data to a server ------------------------ In addition to fetching data from a server, `[HttpClient](../api/common/http/httpclient)` supports other HTTP methods such as PUT, POST, and DELETE, which you can use to modify the remote data. The sample app for this guide includes an abridged version of the "Tour of Heroes" example that fetches heroes and enables users to add, delete, and update them. The following sections show examples of the data-update methods from the sample's `HeroesService`. ### Making a POST request Apps often send data to a server with a POST request when submitting a form. In the following example, the `HeroesService` makes an HTTP POST request when adding a hero to the database. ``` /** POST: add a new hero to the database */ addHero(hero: Hero): Observable<Hero> { return this.http.post<Hero>(this.heroesUrl, hero, httpOptions) .pipe( catchError(this.handleError('addHero', hero)) ); } ``` The `[HttpClient.post()](../api/common/http/httpclient#post)` method is similar to `get()` in that it has a type parameter, which you can use to specify that you expect the server to return data of a given type. The method takes a resource URL and two additional parameters: | Parameter | Details | | --- | --- | | body | The data to POST in the body of the request. | | options | An object containing method options which, in this case, [specify required headers](http#adding-headers). | The example catches errors as [described above](http#error-details). The `HeroesComponent` initiates the actual POST operation by subscribing to the `Observable` returned by this service method. ``` this.heroesService .addHero(newHero) .subscribe(hero => this.heroes.push(hero)); ``` When the server responds successfully with the newly added hero, the component adds that hero to the displayed `heroes` list. ### Making a DELETE request This application deletes a hero with the `HttpClient.delete` method by passing the hero's ID in the request URL. ``` /** DELETE: delete the hero from the server */ deleteHero(id: number): Observable<unknown> { const url = `${this.heroesUrl}/${id}`; // DELETE api/heroes/42 return this.http.delete(url, httpOptions) .pipe( catchError(this.handleError('deleteHero')) ); } ``` The `HeroesComponent` initiates the actual DELETE operation by subscribing to the `Observable` returned by this service method. ``` this.heroesService .deleteHero(hero.id) .subscribe(); ``` The component isn't expecting a result from the delete operation, so it subscribes without a callback. Even though you are not using the result, you still have to subscribe. Calling the `subscribe()` method *executes* the observable, which is what initiates the DELETE request. > You must call `subscribe()` or nothing happens. 
Just calling `HeroesService.deleteHero()` does not initiate the DELETE request. > > ``` // oops ... subscribe() is missing so nothing happens this.heroesService.deleteHero(hero.id); ``` ### Making a PUT request An app can send PUT requests using the HTTP client service. The following `HeroesService` example, like the POST example, replaces a resource with updated data. ``` /** PUT: update the hero on the server. Returns the updated hero upon success. */ updateHero(hero: Hero): Observable<Hero> { return this.http.put<Hero>(this.heroesUrl, hero, httpOptions) .pipe( catchError(this.handleError('updateHero', hero)) ); } ``` As for any of the HTTP methods that return an observable, the caller, `HeroesComponent.update()`, [must `subscribe()`](http#always-subscribe "Why you must always subscribe.") to the observable returned from the `[HttpClient.put()](../api/common/http/httpclient#put)` in order to initiate the request. ### Adding and updating headers Many servers require extra headers for save operations. For example, a server might require an authorization token or a "Content-Type" header to explicitly declare the MIME type of the request body. ##### Adding headers The `HeroesService` defines such headers in an `httpOptions` object that is passed to every `[HttpClient](../api/common/http/httpclient)` save method. ``` import { HttpHeaders } from '@angular/common/http'; const httpOptions = { headers: new HttpHeaders({ 'Content-Type': 'application/json', Authorization: 'my-auth-token' }) }; ``` ##### Updating headers You can't directly modify the existing headers within the previous options object because instances of the `[HttpHeaders](../api/common/http/httpheaders)` class are immutable. Use the `set()` method instead, to return a clone of the current instance with the new changes applied. The following example shows how, when an old token expires, you can update the authorization header before making the next request. ``` httpOptions.headers = httpOptions.headers.set('Authorization', 'my-new-auth-token'); ``` Configuring HTTP URL parameters ------------------------------- Use the `[HttpParams](../api/common/http/httpparams)` class with the `params` request option to add URL query strings in your `[HttpRequest](../api/common/http/httprequest)`. In the following example, the `searchHeroes()` method queries for heroes whose names contain the search term. Start by importing the `[HttpParams](../api/common/http/httpparams)` class. ``` import {HttpParams} from "@angular/common/http"; ``` ``` /* GET heroes whose name contains search term */ searchHeroes(term: string): Observable<Hero[]> { term = term.trim(); // Add safe, URL encoded search parameter if there is a search term const options = term ? { params: new HttpParams().set('name', term) } : {}; return this.http.get<Hero[]>(this.heroesUrl, options) .pipe( catchError(this.handleError<Hero[]>('searchHeroes', [])) ); } ``` If there is a search term, the code constructs an options object with a URL-encoded search parameter. If the term is "cat", for example, the GET request URL would be `api/heroes?name=cat`. The `[HttpParams](../api/common/http/httpparams)` object is immutable. If you need to update the options, save the returned value of the `.set()` method.
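As a small sketch (the parameter names here are illustrative), each call to `set()` returns a new `HttpParams` instance, so keep the returned object rather than the original:

```
import { HttpParams } from '@angular/common/http';

let params = new HttpParams();
params.set('name', 'foo');          // no effect: the new instance is discarded
params = params.set('name', 'foo'); // correct: keep the returned instance
params = params.set('page', '2');   // update again in the same way
console.log(params.toString());     // "name=foo&page=2"
```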
You can also create HTTP parameters directly from a query string by using the `fromString` variable: ``` const params = new HttpParams({fromString: 'name=foo'}); ``` Intercepting requests and responses ----------------------------------- With interception, you declare *interceptors* that inspect and transform HTTP requests from your application to a server. The same interceptors can also inspect and transform a server's responses on their way back to the application. Multiple interceptors form a *forward-and-backward* chain of request/response handlers. Interceptors can perform a variety of *implicit* tasks, from authentication to logging, in a routine, standard way, for every HTTP request/response. Without interception, developers would have to implement these tasks *explicitly* for each `[HttpClient](../api/common/http/httpclient)` method call. ### Write an interceptor To implement an interceptor, declare a class that implements the `intercept()` method of the `[HttpInterceptor](../api/common/http/httpinterceptor)` interface. Here is a do-nothing `noop` interceptor that passes the request through without touching it: ``` import { Injectable } from '@angular/core'; import { HttpEvent, HttpInterceptor, HttpHandler, HttpRequest } from '@angular/common/http'; import { Observable } from 'rxjs'; /** Pass untouched request through to the next request handler. */ @Injectable() export class NoopInterceptor implements HttpInterceptor { intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { return next.handle(req); } } ``` The `intercept` method transforms a request into an `Observable` that eventually returns the HTTP response. In this sense, each interceptor is fully capable of handling the request entirely by itself. Most interceptors inspect the request on the way in and forward the potentially altered request to the `handle()` method of the `next` object which implements the [`HttpHandler`](../api/common/http/httphandler) interface. ``` export abstract class HttpHandler { abstract handle(req: HttpRequest<any>): Observable<HttpEvent<any>>; } ``` Like `intercept()`, the `handle()` method transforms an HTTP request into an `Observable` of [`HttpEvents`](http#interceptor-events) which ultimately include the server's response. The `intercept()` method could inspect that observable and alter it before returning it to the caller. This `no-op` interceptor calls `next.handle()` with the original request and returns the observable without doing a thing. ### The `next` object The `next` object represents the next interceptor in the chain of interceptors. The final `next` in the chain is the `[HttpClient](../api/common/http/httpclient)` backend handler that sends the request to the server and receives the server's response. Most interceptors call `next.handle()` so that the request flows through to the next interceptor and, eventually, the backend handler. An interceptor *could* skip calling `next.handle()`, short-circuit the chain, and [return its own `Observable`](http#caching) with an artificial server response. This is a common middleware pattern found in frameworks such as Express.js. ### Provide the interceptor The `NoopInterceptor` is a service managed by Angular's [dependency injection (DI)](dependency-injection) system. Like other services, you must provide the interceptor class before the app can use it. 
Because interceptors are optional dependencies of the `[HttpClient](../api/common/http/httpclient)` service, you must provide them in the same injector or a parent of the injector that provides `[HttpClient](../api/common/http/httpclient)`. Interceptors provided *after* DI creates the `[HttpClient](../api/common/http/httpclient)` are ignored. This app provides `[HttpClient](../api/common/http/httpclient)` in the app's root injector, as a side effect of importing the `[HttpClientModule](../api/common/http/httpclientmodule)` in `AppModule`. You should provide interceptors in `AppModule` as well. After importing the `[HTTP\_INTERCEPTORS](../api/common/http/http_interceptors)` injection token from `@angular/common/[http](../api/common/http)`, write the `NoopInterceptor` provider like this: ``` { provide: HTTP_INTERCEPTORS, useClass: NoopInterceptor, multi: true }, ``` Notice the `multi: true` option. This required setting tells Angular that `[HTTP\_INTERCEPTORS](../api/common/http/http_interceptors)` is a token for a *multiprovider* that injects an array of values, rather than a single value. You *could* add this provider directly to the providers array of the `AppModule`. However, it's rather verbose and there's a good chance that you'll create more interceptors and provide them in the same way. You must also pay [close attention to the order](http#interceptor-order) in which you provide these interceptors. Consider creating a "barrel" file that gathers all the interceptor providers into an `httpInterceptorProviders` array, starting with this first one, the `NoopInterceptor`. ``` /* "Barrel" of Http Interceptors */ import { HTTP_INTERCEPTORS } from '@angular/common/http'; import { NoopInterceptor } from './noop-interceptor'; /** Http interceptor providers in outside-in order */ export const httpInterceptorProviders = [ { provide: HTTP_INTERCEPTORS, useClass: NoopInterceptor, multi: true }, ]; ``` Then import and add it to the `AppModule` `providers` array like this: ``` providers: [ httpInterceptorProviders ], ``` As you create new interceptors, add them to the `httpInterceptorProviders` array and you won't have to revisit the `AppModule`. > There are many more interceptors in the complete sample code. > > ### Interceptor order Angular applies interceptors in the order that you provide them. For example, consider a situation in which you want to handle the authentication of your HTTP requests and log them before sending them to a server. To accomplish this task, you could provide an `AuthInterceptor` service and then a `LoggingInterceptor` service. Outgoing requests would flow from the `AuthInterceptor` to the `LoggingInterceptor`. Responses from these requests would flow in the other direction, from `LoggingInterceptor` back to `AuthInterceptor`. > The last interceptor in the process is always the `[HttpBackend](../api/common/http/httpbackend)` that handles communication with the server. > > You cannot change the order or remove interceptors later. If you need to enable and disable an interceptor dynamically, you'll have to build that capability into the interceptor itself. ### Handling interceptor events Most `[HttpClient](../api/common/http/httpclient)` methods return observables of `[HttpResponse](../api/common/http/httpresponse)<any>`. The `[HttpResponse](../api/common/http/httpresponse)` class itself is actually an event, whose type is `[HttpEventType.Response](../api/common/http/httpeventtype#Response)`.
A single HTTP request can, however, generate multiple events of other types, including upload and download progress events. The methods `HttpInterceptor.intercept()` and `HttpHandler.handle()` return observables of `[HttpEvent](../api/common/http/httpevent)<any>`. Many interceptors are only concerned with the outgoing request and return the event stream from `next.handle()` without modifying it. Some interceptors, however, need to examine and modify the response from `next.handle()`; these operations can see all of these events in the stream. Although interceptors are capable of modifying requests and responses, the `[HttpRequest](../api/common/http/httprequest)` and `[HttpResponse](../api/common/http/httpresponse)` instance properties are `readonly`, rendering them largely immutable. They are immutable for a good reason: An app might retry a request several times before it succeeds, which means that the interceptor chain can re-process the same request multiple times. If an interceptor could modify the original request object, the re-tried operation would start from the modified request rather than the original. Immutability ensures that interceptors see the same request for each try. > Your interceptor should return every event without modification unless it has a compelling reason to do otherwise. > > TypeScript prevents you from setting `[HttpRequest](../api/common/http/httprequest)` read-only properties. ``` // Typescript disallows the following assignment because req.url is readonly req.url = req.url.replace('http://', 'https://'); ``` If you must alter a request, clone it first and modify the clone before passing it to `next.handle()`. You can clone and modify the request in a single step, as shown in the following example. ``` // clone request and replace 'http://' with 'https://' at the same time const secureReq = req.clone({ url: req.url.replace('http://', 'https://') }); // send the cloned, "secure" request to the next handler. return next.handle(secureReq); ``` The `clone()` method's hash argument lets you mutate specific properties of the request while copying the others. #### Modifying a request body The `readonly` assignment guard can't prevent deep updates and, in particular, it can't prevent you from modifying a property of a request body object. ``` req.body.name = req.body.name.trim(); // bad idea! ``` If you must modify the request body, follow these steps. 1. Copy the body and make your change in the copy. 2. Clone the request object, using its `clone()` method. 3. Replace the clone's body with the modified copy. ``` // copy the body and trim whitespace from the name property const newBody = { ...body, name: body.name.trim() }; // clone request and set its body const newReq = req.clone({ body: newBody }); // send the cloned request to the next handler. return next.handle(newReq); ``` #### Clearing the request body in a clone Sometimes you need to clear the request body rather than replace it. To do this, set the cloned request body to `null`. > **TIP**: If you set the cloned request body to `undefined`, Angular assumes you intend to leave the body as is. > > ``` newReq = req.clone({ … }); // body not mentioned => preserve original body newReq = req.clone({ body: undefined }); // preserve original body newReq = req.clone({ body: null }); // clear the body ``` Http interceptor use-cases -------------------------- Following are a number of common uses for interceptors. ### Setting default headers Apps often use an interceptor to set default headers on outgoing requests. 
The sample app has an `AuthService` that produces an authorization token. Here is its `AuthInterceptor` that injects that service to get the token and adds an authorization header with that token to every outgoing request: ``` import { AuthService } from '../auth.service'; @Injectable() export class AuthInterceptor implements HttpInterceptor { constructor(private auth: AuthService) {} intercept(req: HttpRequest<any>, next: HttpHandler) { // Get the auth token from the service. const authToken = this.auth.getAuthorizationToken(); // Clone the request and replace the original headers with // cloned headers, updated with the authorization. const authReq = req.clone({ headers: req.headers.set('Authorization', authToken) }); // send cloned request with header to the next handler. return next.handle(authReq); } } ``` The practice of cloning a request to set new headers is so common that there's a `setHeaders` shortcut for it: ``` // Clone the request and set the new header in one step. const authReq = req.clone({ setHeaders: { Authorization: authToken } }); ``` An interceptor that alters headers can be used for a number of different operations, including: * Authentication/authorization * Caching behavior; for example, `If-Modified-Since` * XSRF protection ### Logging request and response pairs Because interceptors can process the request and response *together*, they can perform tasks such as timing and logging an entire HTTP operation. Consider the following `LoggingInterceptor`, which captures the time of the request, the time of the response, and logs the outcome with the elapsed time with the injected `MessageService`. ``` import { finalize, tap } from 'rxjs/operators'; import { MessageService } from '../message.service'; @Injectable() export class LoggingInterceptor implements HttpInterceptor { constructor(private messenger: MessageService) {} intercept(req: HttpRequest<any>, next: HttpHandler) { const started = Date.now(); let ok: string; // extend server response observable with logging return next.handle(req) .pipe( tap({ // Succeeds when there is a response; ignore other events next: (event) => (ok = event instanceof HttpResponse ? 'succeeded' : ''), // Operation failed; error is an HttpErrorResponse error: (error) => (ok = 'failed') }), // Log when response observable either completes or errors finalize(() => { const elapsed = Date.now() - started; const msg = `${req.method} "${req.urlWithParams}" ${ok} in ${elapsed} ms.`; this.messenger.add(msg); }) ); } } ``` The RxJS `tap` operator captures whether the request succeeded or failed. The RxJS `finalize` operator is called when the response observable either returns an error or completes and reports the outcome to the `MessageService`. Neither `tap` nor `finalize` touch the values of the observable stream returned to the caller. ### Custom JSON parsing Interceptors can be used to replace the built-in JSON parsing with a custom implementation. The `CustomJsonInterceptor` in the following example demonstrates how to achieve this. If the intercepted request expects a `'json'` response, the `responseType` is changed to `'text'` to disable the built-in JSON parsing. Then the response is parsed via the injected `JsonParser`. ``` // The JsonParser class acts as a base class for custom parsers and as the DI token. 
@Injectable() export abstract class JsonParser { abstract parse(text: string): any; } @Injectable() export class CustomJsonInterceptor implements HttpInterceptor { constructor(private jsonParser: JsonParser) {} intercept(httpRequest: HttpRequest<any>, next: HttpHandler) { if (httpRequest.responseType === 'json') { // If the expected response type is JSON then handle it here. return this.handleJsonResponse(httpRequest, next); } else { return next.handle(httpRequest); } } private handleJsonResponse(httpRequest: HttpRequest<any>, next: HttpHandler) { // Override the responseType to disable the default JSON parsing. httpRequest = httpRequest.clone({responseType: 'text'}); // Handle the response using the custom parser. return next.handle(httpRequest).pipe(map(event => this.parseJsonResponse(event))); } private parseJsonResponse(event: HttpEvent<any>) { if (event instanceof HttpResponse && typeof event.body === 'string') { return event.clone({body: this.jsonParser.parse(event.body)}); } else { return event; } } } ``` You can then implement your own custom `JsonParser`. Here is a custom `JsonParser` that has a special date reviver. ``` @Injectable() export class CustomJsonParser implements JsonParser { parse(text: string): any { return JSON.parse(text, dateReviver); } } function dateReviver(key: string, value: any) { /* . . . */ } ``` You provide the `CustomJsonParser` along with the `CustomJsonInterceptor`. ``` { provide: HTTP_INTERCEPTORS, useClass: CustomJsonInterceptor, multi: true }, { provide: JsonParser, useClass: CustomJsonParser }, ``` ### Caching requests Interceptors can handle requests by themselves, without forwarding to `next.handle()`. For example, you might decide to cache certain requests and responses to improve performance. You can delegate caching to an interceptor without disturbing your existing data services. The `CachingInterceptor` in the following example demonstrates this approach. ``` @Injectable() export class CachingInterceptor implements HttpInterceptor { constructor(private cache: RequestCache) {} intercept(req: HttpRequest<any>, next: HttpHandler) { // continue if not cacheable. if (!isCacheable(req)) { return next.handle(req); } const cachedResponse = this.cache.get(req); return cachedResponse ? of(cachedResponse) : sendRequest(req, next, this.cache); } } ``` * The `isCacheable()` function determines if the request is cacheable. In this sample, only GET requests to the package search API are cacheable. * If the request is not cacheable, the interceptor forwards the request to the next handler in the chain. * If a cacheable request is found in the cache, the interceptor returns an `of()` *observable* with the cached response, bypassing the `next` handler and all other interceptors downstream. * If a cacheable request is not in cache, the code calls `sendRequest()`. This function forwards the request to `next.handle()`, which ultimately calls the server and returns the server's response. ``` /** * Get server response observable by sending request to `next()`. * Will add the response to the cache on the way out. */ function sendRequest( req: HttpRequest<any>, next: HttpHandler, cache: RequestCache): Observable<HttpEvent<any>> { return next.handle(req).pipe( tap(event => { // There may be other events besides the response. if (event instanceof HttpResponse) { cache.put(req, event); // Update the cache. } }) ); } ``` > Notice how `sendRequest()` intercepts the response on its way back to the application.
This method pipes the response through the `tap()` operator, whose callback adds the response to the cache. > > The original response continues untouched back up through the chain of interceptors to the application caller. > > Data services, such as `PackageSearchService`, are unaware that some of their `[HttpClient](../api/common/http/httpclient)` requests actually return cached responses. > > ### Using interceptors to request multiple values The `[HttpClient.get()](../api/common/http/httpclient#get)` method normally returns an observable that emits a single value, either the data or an error. An interceptor can change this to an observable that emits [multiple values](observables). The following revised version of the `CachingInterceptor` optionally returns an observable that immediately emits the cached response, sends the request on to the package search API, and emits again later with the updated search results. ``` // cache-then-refresh if (req.headers.get('x-refresh')) { const results$ = sendRequest(req, next, this.cache); return cachedResponse ? results$.pipe( startWith(cachedResponse) ) : results$; } // cache-or-fetch return cachedResponse ? of(cachedResponse) : sendRequest(req, next, this.cache); ``` > The *cache-then-refresh* option is triggered by the presence of a custom `x-refresh` header. > > A checkbox on the `PackageSearchComponent` toggles a `withRefresh` flag, which is one of the arguments to `PackageSearchService.search()`. That `search()` method creates the custom `x-refresh` header and adds it to the request before calling `[HttpClient.get()](../api/common/http/httpclient#get)`. > > The revised `CachingInterceptor` sets up a server request whether there's a cached value or not, using the same `sendRequest()` method described [above](http#send-request). The `results$` observable makes the request when subscribed. * If there's no cached value, the interceptor returns `results$`. * If there is a cached value, the code *pipes* the cached response onto `results$`. This produces a recomposed observable that emits two responses, so subscribers will see a sequence of these two responses: * The cached response that's emitted immediately * The response from the server, that's emitted later Tracking and showing request progress ------------------------------------- Sometimes applications transfer large amounts of data and those transfers can take a long time. File uploads are a typical example. You can give the users a better experience by providing feedback on the progress of such transfers. To make a request with progress events enabled, create an instance of `[HttpRequest](../api/common/http/httprequest)` with the `reportProgress` option set true to enable tracking of progress events. ``` const req = new HttpRequest('POST', '/upload/file', file, { reportProgress: true }); ``` > **TIP**: Every progress event triggers change detection, so only turn them on if you need to report progress in the UI. > > When using [`HttpClient.request()`](../api/common/http/httpclient#request) with an HTTP method, configure the method with [`observe: 'events'`](../api/common/http/httpclient#request) to see all events, including the progress of transfers. > > Next, pass this request object to the `[HttpClient.request()](../api/common/http/httpclient#request)` method, which returns an `Observable` of `HttpEvents` (the same events processed by [interceptors](http#interceptor-events)). 
``` // The `HttpClient.request` API produces a raw event stream // which includes start (sent), progress, and response events. return this.http.request(req).pipe( map(event => this.getEventMessage(event, file)), tap(message => this.showProgress(message)), last(), // return last (completed) message to caller catchError(this.handleError(file)) ); ``` The `getEventMessage` method interprets each type of `[HttpEvent](../api/common/http/httpevent)` in the event stream. ``` /** Return distinct message for sent, upload progress, & response events */ private getEventMessage(event: HttpEvent<any>, file: File) { switch (event.type) { case HttpEventType.Sent: return `Uploading file "${file.name}" of size ${file.size}.`; case HttpEventType.UploadProgress: // Compute and show the % done: const percentDone = event.total ? Math.round(100 * event.loaded / event.total) : 0; return `File "${file.name}" is ${percentDone}% uploaded.`; case HttpEventType.Response: return `File "${file.name}" was completely uploaded!`; default: return `File "${file.name}" surprising upload event: ${event.type}.`; } } ``` > The sample app for this guide doesn't have a server that accepts uploaded files. The `UploadInterceptor` in `app/http-interceptors/upload-interceptor.ts` intercepts and short-circuits upload requests by returning an observable of simulated events. > > Optimizing server interaction with debouncing --------------------------------------------- If you need to make an HTTP request in response to user input, it's not efficient to send a request for every keystroke. It's better to wait until the user stops typing and then send a request. This technique is known as debouncing. Consider the following template, which lets a user enter a search term to find a package by name. When the user enters a name in a search-box, the `PackageSearchComponent` sends a search request for a package with that name to the package search API. ``` <input type="text" (keyup)="search(getValue($event))" id="name" placeholder="Search"/> <ul> <li *ngFor="let package of packages$ | async"> <b>{{package.name}} v.{{package.version}}</b> - <i>{{package.description}}</i> </li> </ul> ``` Here, the `keyup` event binding sends every keystroke to the component's `search()` method. > The type of `$event.target` is only `EventTarget` in the template. In the `getValue()` method, the target is cast to an `HTMLInputElement` to give type-safe access to its `value` property. > > > ``` > getValue(event: Event): string { > return (event.target as HTMLInputElement).value; > } > ``` > The following snippet implements debouncing for this input using RxJS operators. ``` withRefresh = false; packages$!: Observable<NpmPackageInfo[]>; private searchText$ = new Subject<string>(); search(packageName: string) { this.searchText$.next(packageName); } ngOnInit() { this.packages$ = this.searchText$.pipe( debounceTime(500), distinctUntilChanged(), switchMap(packageName => this.searchService.search(packageName, this.withRefresh)) ); } constructor(private searchService: PackageSearchService) { } ``` The `searchText$` is the sequence of search-box values coming from the user. It's defined as an RxJS `Subject`, which means it is a multicasting `Observable` that can also emit values for itself by calling `next(value)`, as happens in the `search()` method.
Rather than forward every `searchText` value directly to the injected `PackageSearchService`, the code in `ngOnInit()` pipes search values through three operators, so that a search value reaches the service only if it's a new value and the user stopped typing. | RxJS operators | Details | | --- | --- | | `debounceTime(500)`⁠ | Wait for the user to stop typing, which is 1/2 second in this case. | | `distinctUntilChanged()` | Wait until the search text changes. | | `switchMap()`⁠ | Send the search request to the service. | The code sets `packages$` to this re-composed `Observable` of search results. The template subscribes to `packages$` with the [AsyncPipe](../api/common/asyncpipe) and displays search results as they arrive. > See [Using interceptors to request multiple values](http#cache-refresh) for more about the `withRefresh` option. > > ### Using the `switchMap()` operator The `switchMap()` operator takes a function argument that returns an `Observable`. In the example, `PackageSearchService.search` returns an `Observable`, as other data service methods do. If a previous search request is still in-flight, such as when the network connection is poor, the operator cancels that request and sends a new one. > **NOTE**: `switchMap()` returns service responses in their original request order, even if the server returns them out of order. > > > If you think you'll reuse this debouncing logic, consider moving it to a utility function or into the `PackageSearchService` itself. > > Security: XSRF protection ------------------------- [Cross-Site Request Forgery (XSRF or CSRF)](https://en.wikipedia.org/wiki/Cross-site_request_forgery) is an attack technique by which the attacker can trick an authenticated user into unknowingly executing actions on your website. `[HttpClient](../api/common/http/httpclient)` supports a [common mechanism](https://en.wikipedia.org/wiki/Cross-site_request_forgery#Cookie-to-header_token) used to prevent XSRF attacks. When performing HTTP requests, an interceptor reads a token from a cookie, by default `XSRF-TOKEN`, and sets it as an HTTP header, `X-XSRF-TOKEN`. Because only code that runs on your domain could read the cookie, the backend can be certain that the HTTP request came from your client application and not an attacker. By default, an interceptor sends this header on all mutating requests (such as POST) to relative URLs, but not on GET/HEAD requests or on requests with an absolute URL. To take advantage of this, your server needs to set a token in a JavaScript readable session cookie called `XSRF-TOKEN` on either the page load or the first GET request. On subsequent requests the server can verify that the cookie matches the `X-XSRF-TOKEN` HTTP header, and therefore be sure that only code running on your domain could have sent the request. The token must be unique for each user and must be verifiable by the server; this prevents the client from making up its own tokens. Set the token to a digest of your site's authentication cookie with a salt for added security. To prevent collisions in environments where multiple Angular apps share the same domain or subdomain, give each application a unique cookie name. > *`[HttpClient](../api/common/http/httpclient)` supports only the client half of the XSRF protection scheme.* Your backend service must be configured to set the cookie for your page, and to verify that the header is present on all eligible requests. Failing to do so renders Angular's default protection ineffective. 
> > ### Configuring custom cookie/header names If your backend service uses different names for the XSRF token cookie or header, use `[HttpClientXsrfModule.withOptions()](../api/common/http/httpclientxsrfmodule#withOptions)` to override the defaults. ``` imports: [ HttpClientModule, HttpClientXsrfModule.withOptions({ cookieName: 'My-Xsrf-Cookie', headerName: 'My-Xsrf-Header', }), ], ``` Testing HTTP requests --------------------- As for any external dependency, you must mock the HTTP backend so your tests can simulate interaction with a remote server. The `@angular/common/[http](../api/common/http)/testing` library makes it straightforward to set up such mocking. Angular's HTTP testing library is designed for a pattern of testing in which the app executes code and makes requests first. The test then expects that certain requests have or have not been made, performs assertions against those requests, and finally provides responses by "flushing" each expected request. At the end, tests can verify that the app made no unexpected requests. > You can run these sample tests in a live coding environment. > > The tests described in this guide are in `src/testing/http-client.spec.ts`. There are also tests of an application data service that call `[HttpClient](../api/common/http/httpclient)` in `src/app/heroes/heroes.service.spec.ts`. > > ### Setup for testing To begin testing calls to `[HttpClient](../api/common/http/httpclient)`, import the `[HttpClientTestingModule](../api/common/http/testing/httpclienttestingmodule)` and the mocking controller, `[HttpTestingController](../api/common/http/testing/httptestingcontroller)`, along with the other symbols your tests require. ``` // Http testing module and mocking controller import { HttpClientTestingModule, HttpTestingController } from '@angular/common/http/testing'; // Other imports import { TestBed } from '@angular/core/testing'; import { HttpClient, HttpErrorResponse } from '@angular/common/http'; ``` Then add the `[HttpClientTestingModule](../api/common/http/testing/httpclienttestingmodule)` to the `[TestBed](../api/core/testing/testbed)` and continue with the setup of the *service-under-test*. ``` describe('HttpClient testing', () => { let httpClient: HttpClient; let httpTestingController: HttpTestingController; beforeEach(() => { TestBed.configureTestingModule({ imports: [ HttpClientTestingModule ] }); // Inject the http service and test controller for each test httpClient = TestBed.inject(HttpClient); httpTestingController = TestBed.inject(HttpTestingController); }); /// Tests begin /// }); ``` Now requests made in the course of your tests hit the testing backend instead of the normal backend. This setup also calls `TestBed.inject()` to inject the `[HttpClient](../api/common/http/httpclient)` service and the mocking controller so they can be referenced during the tests. ### Expecting and answering requests Now you can write a test that expects a GET Request to occur and provides a mock response. ``` it('can test HttpClient.get', () => { const testData: Data = {name: 'Test Data'}; // Make an HTTP GET request httpClient.get<Data>(testUrl) .subscribe(data => // When observable resolves, result should match test data expect(data).toEqual(testData) ); // The following `expectOne()` will match the request's URL. // If no requests or multiple requests matched that URL // `expectOne()` would throw. const req = httpTestingController.expectOne('/data'); // Assert that the request is a GET. 
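// `expectOne()` returned a `TestRequest`; its `request` property exposes the outgoing `HttpRequest`, so you can assert on its method, headers, and body.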
expect(req.request.method).toEqual('GET'); // Respond with mock data, causing Observable to resolve. // Subscribe callback asserts that correct data was returned. req.flush(testData); // Finally, assert that there are no outstanding requests. httpTestingController.verify(); }); ``` The last step, verifying that no requests remain outstanding, is common enough for you to move it into an `afterEach()` step: ``` afterEach(() => { // After every test, assert that there are no more pending requests. httpTestingController.verify(); }); ``` #### Custom request expectations If matching by URL isn't sufficient, it's possible to implement your own matching function. For example, you could look for an outgoing request that has an authorization header: ``` // Expect one request with an authorization header const req = httpTestingController.expectOne( request => request.headers.has('Authorization') ); ``` As with the previous `expectOne()`, the test fails if 0 or 2+ requests satisfy this predicate. #### Handling more than one request If you need to respond to duplicate requests in your test, use the `match()` API instead of `expectOne()`. It takes the same arguments but returns an array of matching requests. Once returned, these requests are removed from future matching and you are responsible for flushing and verifying them. ``` // get all pending requests that match the given URL const requests = httpTestingController.match(testUrl); expect(requests.length).toEqual(3); // Respond to each request with different results requests[0].flush([]); requests[1].flush([testData[0]]); requests[2].flush(testData); ``` ### Testing for errors You should test the app's defenses against HTTP requests that fail. Call `request.flush()` with an error message, as seen in the following example. ``` it('can test for 404 error', () => { const emsg = 'deliberate 404 error'; httpClient.get<Data[]>(testUrl).subscribe({ next: () => fail('should have failed with the 404 error'), error: (error: HttpErrorResponse) => { expect(error.status).withContext('status').toEqual(404); expect(error.error).withContext('message').toEqual(emsg); }, }); const req = httpTestingController.expectOne(testUrl); // Respond with mock error req.flush(emsg, { status: 404, statusText: 'Not Found' }); }); ``` Alternatively, call `request.error()` with a `ProgressEvent`. ``` it('can test for network error', done => { // Create mock ProgressEvent with type `error`, raised when something goes wrong // at network level. e.g. Connection timeout, DNS error, offline, etc. const mockError = new ProgressEvent('error'); httpClient.get<Data[]>(testUrl).subscribe({ next: () => fail('should have failed with the network error'), error: (error: HttpErrorResponse) => { expect(error.error).toBe(mockError); done(); }, }); const req = httpTestingController.expectOne(testUrl); // Respond with mock error req.error(mockError); }); ``` Passing metadata to interceptors -------------------------------- Many interceptors require or benefit from configuration. Consider an interceptor that retries failed requests. By default, the interceptor might retry a request three times, but you might want to override this retry count for particularly error-prone or sensitive requests. `[HttpClient](../api/common/http/httpclient)` requests contain a *context* that can carry metadata about the request. This context is available for interceptors to read or modify, though it is not transmitted to the backend server when the request is sent. 
This lets applications or other interceptors tag requests with configuration parameters, such as how many times to retry a request. ### Creating a context token Angular stores and retrieves a value in the context using an `[HttpContextToken](../api/common/http/httpcontexttoken)`. You can create a context token using the `new` operator, as in the following example: ``` export const RETRY_COUNT = new HttpContextToken(() => 3); ``` The lambda function `() => 3` passed during the creation of the `[HttpContextToken](../api/common/http/httpcontexttoken)` serves two purposes: 1. It lets TypeScript infer the type of this token: `[HttpContextToken](../api/common/http/httpcontexttoken)<number>` The request context is type-safe —reading a token from a request's context returns a value of the appropriate type. 2. It sets the default value for the token. This is the value that the request context returns if no other value was set for this token. Using a default value avoids the need to check if a particular value is set. ### Setting context values when making a request When making a request, you can provide an `[HttpContext](../api/common/http/httpcontext)` instance, in which you have already set the context values. ``` this.httpClient .get('/data/feed', { context: new HttpContext().set(RETRY_COUNT, 5), }) .subscribe(results => {/* ... */}); ``` ### Reading context values in an interceptor Within an interceptor, you can read the value of a token in a given request's context with `[HttpContext.get()](../api/common/http/httpcontext#get)`. If you have not explicitly set a value for the token, Angular returns the default value specified in the token. ``` import {retry} from 'rxjs'; export class RetryInterceptor implements HttpInterceptor { intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { const retryCount = req.context.get(RETRY_COUNT); return next.handle(req).pipe( // Retry the request a configurable number of times. retry(retryCount), ); } } ``` ### Contexts are mutable Unlike most other aspects of `[HttpRequest](../api/common/http/httprequest)` instances, the request context is mutable and persists across other immutable transformations of the request. This lets interceptors coordinate operations through the context. For instance, the `RetryInterceptor` example could use a second context token to track how many errors occur during the execution of a given request: ``` import {retry, tap} from 'rxjs/operators'; export const RETRY_COUNT = new HttpContextToken(() => 3); export const ERROR_COUNT = new HttpContextToken(() => 0); export class RetryInterceptor implements HttpInterceptor { intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { const retryCount = req.context.get(RETRY_COUNT); return next.handle(req).pipe( tap({ // An error has occurred, so increment this request's ERROR_COUNT. error: () => req.context.set(ERROR_COUNT, req.context.get(ERROR_COUNT) + 1) }), // Retry the request a configurable number of times. retry(retryCount), ); } } ``` Last reviewed on Mon Feb 28 2022
angular Angular documentation style guide Angular documentation style guide ================================= This style guide covers the standards for writing [Angular documentation on angular.io](docs). These standards ensure consistency in writing style, Markdown conventions, and code snippets. Prerequisites ------------- Before contributing to the Angular documentation, it is helpful if you are familiar with the following: | Subjects | Details | | --- | --- | | `git` | For an introduction, see GitHub's [Git Handbook](https://guides.github.com/introduction/git-handbook) | | GitHub | For an introduction, see GitHub's [Hello World](https://guides.github.com/activities/hello-world) | | Markdown | See GitHub's [Mastering Markdown](https://guides.github.com/features/mastering-markdown) | | Angular coding style | See the [Angular Style Guide](styleguide "Angular Application Code Style Guide") | | Google writing style | The [Google Developer Documentation Style Guide](https://developers.google.com/style) is a comprehensive resource that this Angular documentation style guide builds upon | Kinds of Angular documentation ------------------------------ The categories of Angular documentation include: | Angular documentation categories | Details | | --- | --- | | [Guides](docs) | Much of what's in the [documentation section of angular.io](docs). Guides walk the reader step-by-step through tasks to demonstrate concepts and are often accompanied by a working example. These include [Getting Started](start), [Tour of Heroes](../tutorial/tour-of-heroes), and pages about [Forms](forms-overview), [Dependency Injection](dependency-injection), and [HttpClient](http). Contributing members of the community and Angular team members maintain this documentation in [Markdown](https://daringfireball.net/projects/markdown/syntax "Markdown"). | | [API documentation](api) | Reference documents for the [Angular Application Programming Interface, or API](api). These are more succinct than guides and serve as a reference for Angular features. They are especially helpful for people already acquainted with Angular concepts. The [angular.io](https://angular.io) infrastructure generates these documents from source code and comments that contributors edit. | | [CLI documentation](cli) | The [angular.io](https://angular.io) infrastructure generates these documents from CLI source code. | Markdown and HTML ----------------- While the Angular guides are [Markdown](https://daringfireball.net/projects/markdown/syntax "Markdown") files, there are some sections within the guides that use HTML. > To enable HTML in an Angular guide, **always** follow every opening and closing HTML tag with a blank line. > > Notice the required blank line after the opening `<div>` in the following example: ``` <div class="alert is-helpful"> **Always** follow every opening and closing HTML tag with *a blank line*. </div> ``` It is customary but not required to precede the closing HTML tag with a blank line as well. Title ----- Every guide document must have a title, and it should appear at the top of the page. Begin the title with the Markdown hash (`#`) character, which renders as an `<h1>` in the browser. ``` # Angular documentation style guide ``` | Title guidance | Details | | --- | --- | | A document can have only one `<h1>` | Title text should be in *Sentence case*, which means the first word is capitalized and all other words are lower case. Technical terms that are always capitalized, like "Angular", are the exception. 
``` # Deprecation policy in Angular ``` | | Always follow the title with at least one blank line | The corresponding text in the left nav is in *Title Case*, which means that you use capital letters to start the first words and all principal words. Use lower case letters for secondary words such as "in", "of", and "the". You can also shorten the nav title to fit in the column. | Sections -------- A typical document has sections. All section headings are in *Sentence case*, which means the first word is capitalized and all other words are lower case. **Always follow a section heading with at least one blank line.** ### Main section heading There are usually one or more main sections that may be further divided into secondary sections. Begin a main section heading with the Markdown `##` characters, which renders as an `<h2>` in the browser. Follow main section headings with a blank line and then the content for that heading as in the following example: ``` ## Main section heading Content after a blank line. ``` ### Secondary section heading A secondary section heading is related to a main heading and falls textually within the bounds of that main heading. Begin a secondary heading with the Markdown `###` characters, which renders as an `<h3>` in the browser. Follow a secondary heading by a blank line and then the content for that heading as in the following example: ``` ### Secondary section heading Content after a blank line. ``` ### Additional section headings While you can use additional section headings, the [Table-of-contents (TOC)](docs-style-guide#table-of-contents) generator only shows `<h2>` and `<h3>` headings in the TOC on the right of the page. ``` #### The TOC won't display this Content after a blank line. ``` Table of contents ----------------- Most pages display a table of contents or TOC. The TOC appears in the right panel when the viewport is wide. When narrow, the TOC appears in a collapsible region near the top of the page. You don't need to create your own TOC by hand because the TOC generator creates one automatically from the page's `<h2>` and `<h3>` headers. To exclude a heading from the TOC, create the heading as an `<h2>` or `<h3>` element with a class called 'no-toc'. ``` <h3 class="no-toc"> This heading is not displayed in the TOC </h3> ``` You can turn off TOC generation for the entire page by writing the title with an `<h1>` tag and the `no-toc` class. ``` <h1 class="no-toc"> A guide without a TOC </h1> ``` Navigation ---------- To generate the navigation links at the top, left, and bottom of the screen, use the JSON configuration file, `content/navigation.json`. > If you have an idea that would result in navigation changes, [file an issue](https://github.com/angular/angular/issues/new/choose) first so that the Angular team and community can discuss the change. > > For a new guide page, edit the `SideNav` node in `navigation.json`. The `SideNav` node is an array of navigation nodes. Each node is either an item node for a single document or a header node with child nodes. Find the header for your page. For example, a guide page that describes an Angular feature is probably a child of the `Fundamentals` header. ``` { "title": "Fundamentals", "tooltip": "The fundamentals of Angular", "children": [ … ] } ``` A header node child can be an item node or another header node. If your guide page belongs under a sub-header, find that sub-header in the JSON. 
Add an item node for your guide page as a child of the appropriate header node as in the following example: ``` { "url": "guide/docs-style-guide", "title": "Doc authors style guide", "tooltip": "Style guide for documentation authors.", }, ``` A navigation node has the following properties: | Properties | Details | | --- | --- | | `url` | The URL of the guide page, which is an item node only. | | `title` | The text displayed in the side nav. | | `tooltip` | Text that appears when the reader hovers over the navigation link. | | `children` | An array of child nodes, which is a header node only. | | `hidden` | Defined and set `true` if this is a guide page that should not be displayed in the navigation panel. | > Do not create a node that is both a header and an item node by specifying the `url` property of a header node. > > Code snippets ------------- [Angular.io](docs) has a custom framework that enables authors to include code snippets directly from working example applications that are automatically tested as part of documentation builds. In addition to working code snippets, example code can include terminal commands, a fragment of TypeScript or HTML, or an entire code file. Whatever the source, the document viewer renders them as code snippets, either individually with the [code-example](docs-style-guide#code-example "code-example") component or as a tabbed collection with the [code-tabs](docs-style-guide#code-tabs "code-tabs") component. ### When to use code font You can display a minimal, inline code snippet with the Markdown backtick syntax. Use a single backtick on either side of a term when referring to code or the name of a file in a sentence. The following are some examples: * In the `app.component.ts`, add a `logger()` method. * The `name` property is `Sally`. * Add the component class name to the `declarations` array. The Markdown is as follows: ``` * In the `app.component.ts`, add a `logger()` method. * The <code class="no-auto-link">item</code> property is `true`. * Add the component class name to the `declarations` array. ``` ### Auto-linking in code snippets In certain cases, when you apply backticks around a term, it may auto-link to the API documentation. If you do not intend the term to be a link, use the following syntax: ``` The <code class="no-auto-link">item</code> property is `true`. ``` ### Hard-coded snippets Ideally, you should source code snippets [from working sample code](docs-style-guide#from-code-samples), though there are times when an inline snippet is necessary. For terminal input and output, place the content between `<code-example>` tags and set the language attribute to `sh` as in this example: ``` npm start ``` ``` <code-example language="shell"> npm start </code-example> ``` Inline, hard-coded snippets like this one are not testable and, therefore, intrinsically unreliable. This example belongs to the small set of pre-approved, inline snippets that includes user input in a command shell or the output of some process. In all other cases, code snippets should be generated automatically from tested code samples. For hypothetical examples such as illustrations of configuration options in a JSON file, use the `<code-example>` tag with the `header` attribute to identify the context. ### Compilable example apps One of the Angular documentation design goals is that guide page code snippets be examples of working code. Authors meet this goal by displaying code snippets directly from working sample applications, written specifically for these guide pages. 
Find sample applications in sub-folders of the `content/examples` directory of the `angular/angular` repository. An example folder name is often the same as the guide page it supports. > A guide page might not have its own sample code. It might refer instead to a sample belonging to another page. > > The Angular CI process runs all end-to-end tests for every Angular PR. Angular re-tests the samples after every new version of a sample and every new version of Angular. When possible, every snippet of code on a guide page should be derived from a code sample file. You tell the Angular documentation engine which code file —or fragment of a code file— to display by configuring `<code-example>` attributes. ### Displaying an entire code file This Angular documentation style guide that you are currently reading has its own example application, located in the `content/examples/docs-style-guide` folder. The following `<code-example>` displays the sample's `app.module.ts`: ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; @NgModule({ imports: [ BrowserModule, FormsModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` The following markup produces that snippet: ``` <code-example path="docs-style-guide/src/app/app.module.ts" header="src/app/app.module.ts"></code-example> ``` The `path` attribute identifies the snippet's source file at the example application folder's location within `content/examples`. In this example, that path is `docs-style-guide/src/app/app.module.ts`. The header tells the reader where to find the file. Following convention, set the `header` attribute to the file's location within the example application's root folder. Unless otherwise commented, all code snippets in this page are from sample source code located in the `content/examples/docs-style-guide` directory. ### Displaying part of a code file To include a snippet of code within a sample code file, rather than the entire file, use the `<code-example>` `region` attribute. The following example focuses on the `AppModule` class and its `@[NgModule](../api/core/ngmodule)()` metadata: ``` @NgModule({ imports: [ BrowserModule, FormsModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` To render the above example, the HTML in the Markdown file is as follows: ``` <code-example path="docs-style-guide/src/app/app.module.ts" header="src/app/app.module.ts" region="class"></code-example> ``` The `path` points to the file, just as in examples that render the [entire file](docs-style-guide#display-whole-file). The `region` attribute specifies a portion of the source file delineated by an opening `#docregion` and a closing `#enddocregion`. You can see the `class` `#docregion` in the source file below. Notice the commented lines `#docregion` and `#enddocregion` in `content/examples/docs-style-guide/src/app/app.module.ts` with the name `class`. 
``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; // #docregion class @NgModule({ imports: [ BrowserModule, FormsModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } // #enddocregion class ``` The opening and ending `#docregion` lines designate any lines of code between them as being included in the code snippet. This is why the import statements outside of the `class` `#docregion` are not in the code snippet. For more information on how to prepare example application files for use in guides, see [Source code markup](docs-style-guide#source-code-markup). ### Code snippet options Specify the `<code-example>` output with the following attributes: | Attributes | Details | | --- | --- | | `path` | The path to the file in the `content/examples` folder. | | `header` | The header of the code listing. This is the title of the code snippet and can include the path and extra information such as whether the snippet is an excerpt. | | `region` | Displays the source file fragment with that region name; regions are identified by `#docregion` markup in the source file. See [Displaying a code snippet](docs-style-guide#region "Displaying a code snippet"). | | `linenums` | Value may be `true`, `false`, or a `number`. The default is `false`, which means that the browser displays no line numbers. The `number` option starts line numbering at the given value. For example, `linenums=4` sets the starting line number to 4. | | `class` | Code snippets can be styled with the CSS classes `no-box` and `avoid`. | | `hideCopy` | Hides the copy button. | | `language` | The source code language such as `javascript`, `html`, `css`, `typescript`, `json`, or `shell`. This attribute only applies to hard-coded examples. | ### Displaying bad code Occasionally, you want to display an example of less than ideal code or design, such as with **avoid** examples in the [Angular Style Guide](styleguide). Because it is possible for readers to copy and paste examples of inferior code in their own applications, try to minimize use of such code. In cases where you need unacceptable examples, you can set the `class` to `avoid` or have the word `avoid` in the filename of the source file. By putting the word `avoid` in the filename or path, the documentation generator automatically adds the `avoid` class to the `<code-example>`. Either of these options frames the code snippet in bright red to grab the reader's attention. 
Here's the markup for an "avoid" example in the [Angular Style Guide](styleguide#style-05-03 "Style 05-03: components as elements") that uses the word `avoid` in the path name: ``` <code-example header="app/heroes/hero-button/hero-button.component.ts" path="styleguide/src/05-03/app/heroes/shared/hero-button/hero-button.component.avoid.ts" region="example"></code-example> ``` Having the word "avoid" in the file name causes the browser to render the code snippet with a red header and border: ``` /* avoid */ @Component({ selector: '[tohHeroButton]', templateUrl: './hero-button.component.html' }) export class HeroButtonComponent {} ``` Alternatively, the HTML could include the `avoid` class as in the following: ``` <code-example class="avoid" header="docs-style-guide/src/app/not-great.component.ts" path="docs-style-guide/src/app/not-great.component.ts" region="not-great"></code-example> ``` Explicitly applying the class `avoid` causes the same result of a red header and red border: ``` export class NotGreatComponent { buggyFunction() { // things could be better here... } } ``` ### Code Tabs Code tabs display code much like `code-examples` with the added advantage of displaying multiple code samples within a tabbed interface. Each tab displays code using a `code-pane`. #### `code-tabs` attributes * `linenums`: The value can be `true`, `false`, or a number indicating the starting line number. The default is `false`. #### `code-pane` attributes | Attributes | Details | | --- | --- | | `path` | A file in the `content/examples` folder | | `header` | What displays in the header of a tab | | `linenums` | Overrides the `linenums` property at the `code-tabs` level for this particular pane. The value can be `true`, `false`, or a number indicating the starting line number. The default is `false`. | The following example displays multiple code tabs, each with its own header. It demonstrates showing line numbers in `<code-tabs>` and `<code-pane>`. ``` <h1>{{title}}</h1> <h2>My Heroes</h2> <ul class="heroes"> <li *ngFor="let hero of heroes"> <button [class.selected]="hero === selectedHero" type="button" (click)="onSelect(hero)"> <span class="badge">{{hero.id}}</span> <span class="name">{{hero.name}}</span> </button> </li> </ul> <div *ngIf="selectedHero"> <h2>{{selectedHero.name}} details!</h2> <div>id: {{selectedHero.id}}</div> <div> <label> name: <input [(ngModel)]="selectedHero.name" placeholder="name"/> </label> </div> </div> ``` ``` import { Component } from '@angular/core'; import { Hero, HEROES } from './hero'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { title = 'Authors Style Guide Sample'; heroes = HEROES; selectedHero!: Hero; onSelect(hero: Hero): void { this.selectedHero = hero; } } ``` ``` .heroes { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 15em; } ``` ``` <div id="search-component"> <label for="search-box">Hero Search</label> <input #searchBox id="search-box" (input)="search(searchBox.value)" /> <ul class="search-result"> <li *ngFor="let hero of heroes$ | async" > <a routerLink="/detail/{{hero.id}}"> {{hero.name}} </a> </li> </ul> </div> ``` The `linenums` attribute set to `true` on `<code-tabs>` explicitly enables numbering for all panes. However, the `linenums` attribute set to `false` in the second `<code-pane>` disables line numbering only for itself. 
``` <code-tabs linenums="true"> <code-pane header="app.component.html" path="docs-style-guide/src/app/app.component.html"> </code-pane> <code-pane header="app.component.ts" path="docs-style-guide/src/app/app.component.ts" linenums="false"> </code-pane> <code-pane header="app.component.css (heroes)" path="docs-style-guide/src/app/app.component.css" region="heroes"> </code-pane> <code-pane header="package.json (scripts)" path="docs-style-guide/package.1.json"> </code-pane> </code-tabs> ``` Preparing source code for code snippets --------------------------------------- To display `<code-example>` and `<code-tabs>` snippets, add code snippet markup to sample source code files. > The sample source code for this page, located in `content/examples/docs-style-guide`, contains examples of every code snippet markup described in this section. > > Code snippet markup is always in the form of a comment. The default `#docregion` markup for a TypeScript or JavaScript file is as follows: ``` // #docregion … some TypeScript or JavaScript code … // #enddocregion ``` ``` <!-- #docregion --> … some HTML … <!-- #enddocregion --> ``` ``` /* #docregion */ … some CSS … /* #enddocregion */ ``` The documentation generation process erases these comments before displaying them in the documentation viewer, StackBlitz, and sample code downloads. > Because JSON does not allow comments, code snippet markup doesn't work in JSON files. See the section on [JSON files](docs-style-guide#json-files) for more information. > > ### `#docregion` Use `#docregion` in source files to mark code for use in `<code-example>` or `<code-tabs>` components. The `#docregion` comment begins a code snippet region. Every line of code after that comment belongs in the region until the code fragment processor encounters the end of the file or a closing `#enddocregion`. The following `src/main.ts` is an example of a file with a single `#docregion` at the top of the file. ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; platformBrowserDynamic().bootstrapModule(AppModule) .catch(err => console.error(err)); ``` As a result, the entire file is in the `<code-example>`. ### Naming a `#docregion` To display multiple snippets from different fragments within the same file, give each fragment its own `#docregion` name as follows, where `your-region-name` is a hyphenated lowercase string: ``` // #docregion your-region-name … some code … // #enddocregion your-region-name ``` Reference this region by name in the `region` attribute of the `<code-example>` or `<code-pane>` as follows: ``` <code-example path="your-example-app/src/app/your-file.ts" region="your-region-name"></code-example> ``` Because the `#docregion` with no name is the default region, you do not need to set the `region` attribute when referring to the default `#docregion`. ### Nesting a `#docregion` Place a `#docregion` within another `#docregion` as in the following example with a nested `inner-region`: ``` // #docregion … some code … // #docregion inner-region … more code … // #enddocregion inner-region … yet more code … /// #enddocregion ``` ### Combining code fragments Combine several fragments from the same file into a single code snippet by defining multiple `#docregion` sections with the same region name. The following example defines two nested `#docregion` sections. 
The inner region, `class-skeleton`, appears twice —once to capture the code that opens the class definition and a second time to capture the code that closes the class definition. ``` // #docplaster … // #docregion class, class-skeleton export class AppComponent { // #enddocregion class-skeleton title = 'Authors Style Guide Sample'; heroes = HEROES; selectedHero: Hero; onSelect(hero: Hero): void { this.selectedHero = hero; } // #docregion class-skeleton } // #enddocregion class, class-skeleton ``` The `#docplaster` marker tells the processor what text string to use —that is, the "plaster"— to join each of the fragments into a single snippet. Place the "plaster" text on the same line. For example, `#docplaster ---` would use `---` as the "plaster" text. In the case of the previous file, the "plaster" text is empty so there will be nothing in between each fragment. Without `#docplaster`, the processor inserts the default plaster —an ellipsis comment— between the fragments. Here are the two corresponding code snippets for side-by-side comparison. ``` export class AppComponent { title = 'Authors Style Guide Sample'; heroes = HEROES; selectedHero!: Hero; onSelect(hero: Hero): void { this.selectedHero = hero; } } ``` ``` export class AppComponent { } ``` The above example also demonstrates that one `#docregion` or `#enddocregion` comment can specify two region names, which is a convenient way to start or stop multiple regions on the same code line. Alternatively, you could put these comments on separate lines as in the following example: ``` // #docplaster … // #docregion class // #docregion class-skeleton export class AppComponent { // #enddocregion class-skeleton title = 'Authors Style Guide Sample'; heroes = HEROES; selectedHero: Hero; onSelect(hero: Hero): void { this.selectedHero = hero; } // #docregion class-skeleton } // #enddocregion class // #enddocregion class-skeleton ``` ### JSON files The `<code-example>` component cannot display portions of a JSON file because JSON forbids comments. However, you can display an entire JSON file by referencing it in the `<code-example>` `src` attribute. For large JSON files, you could copy the nodes-of-interest into Markdown backticks, but as it's easy to mistakenly create invalid JSON that way, consider creating a JSON partial file with the fragment you want to display. You can't test a partial file nor use it in the application, but at least your editor can confirm that it is syntactically correct. You can also store the partial file next to the original, so it is more likely that the author will remember to keep the two in sync. Here's an example that excerpts certain scripts from `package.json` into a partial file named `package.1.json`. ``` { "scripts": { "start": "concurrently \"npm run build:watch\" \"npm run serve\"", "test": "concurrently \"npm run build:watch\" \"karma start karma.conf.js\"", "lint": "tslint ./src/**/*.ts -t verbose" } } ``` ``` <code-example header="package.json (selected scripts)" path="docs-style-guide/package.1.json"></code-example> ``` In some cases, it is preferable to use the name of the full file rather than the partial. In this case, the full file is `package.json` and the partial file is `package.1.json`. Since the focus is generally on the full file rather than the partial, using the name of the file the reader edits, in this example `package.json`, clarifies which file to work in. 
### Partial file naming The step-by-step nature of the guides necessitates refactoring, which means there are code snippets that evolve through a guide. Use partial files to demonstrate intermediate versions of the final source code with fragments of code that don't appear in the final app. The sample naming convention adds a number before the file extension, as follows: ``` package.1.json app.component.1.ts app.component.2.ts ``` Remember to exclude these files from StackBlitz by listing them in the `stackblitz.json` as illustrated here: ``` { "description": "Authors style guide", "files": [ "!**/*.d.ts", "!**/*.js", "!**/*.[1,2,3].*" ], "tags": ["author", "style guide"] } ``` ### Source code styling Source code should follow [Angular's style guide](styleguide) where possible. #### Hexadecimals Hexadecimal values should use the shorthand where possible, and use only lowercase letters. Live examples ------------- Adding `<live-example></live-example>` to a page generates two default links. The first is a link to the StackBlitz example, which the default `stackblitz.json` file defines. You can find the `stackblitz.json` file in the `content/examples/example-app` directory, where `example-app` is the sample application folder you're using for the guide. By default, the documentation generator uses the name of the guide as the name of the example. So, if you're working on `router.md`, and use `<live-example></live-example>` in the document, the documentation generator looks for `content/examples/router`. Clicking this link opens the code sample on StackBlitz in a new browser tab. The second link downloads the sample app. Define live examples by one or more `stackblitz.json` files in the root of a code sample folder. Each sample folder usually has a single unnamed definition file, the default `stackblitz.json`. ### Live Example for named StackBlitz You can create additional, named definition files in the form `name.stackblitz.json`. The [Testing](testing) guide (`aio/content/guide/testing.md`) references a named StackBlitz file as follows: ``` <live-example stackblitz="specs">Tests</live-example> ``` The `stackblitz` attribute value of `specs` refers to the `examples/testing/specs.stackblitz.json` file. If you were to leave out the `stackblitz` attribute, the default would be `examples/testing/stackblitz.json`. ### Custom label and tooltip Change the appearance and behavior of the live example with attributes and classes. The following example gives the live example anchor a custom label and tooltip by setting the `title` attribute: ``` <live-example title="Live Example with title"></live-example> ``` The browser renders a live example link that uses the title text as its label and tooltip. You can achieve the same effect by putting the label between the `<live-example>` tags: ``` <live-example>Live example with content label</live-example> ``` The browser renders the following: Live example with content label ### Live example from another guide To link to an example in a folder where the name is not the same as the current guide page, set the `name` attribute to the name of that folder. For example, to include the [Router](router) guide example in this style guide, set the `name` attribute to `router`, that is, the name of the folder where that example resides. ``` <live-example name="router">Live example from the Router guide</live-example> ``` Live example from the Router guide ### Live Example without download To omit the download link, add the `noDownload` attribute.
``` <live-example noDownload>Just the StackBlitz</live-example> ``` The browser renders the following: Just the StackBlitz ### Live Example with download-only To omit the live StackBlitz link and only link to the download, add the `downloadOnly` attribute. ``` <live-example downloadOnly>Download only</live-example> ``` The browser renders the following: Download only ### Embedded live example By default, a live example link opens a StackBlitz example in a separate browser tab. You can embed the StackBlitz example within the guide page by adding the `embedded` attribute. For performance reasons, StackBlitz does not start right away. Instead, the `<live-example>` component renders an image. Clicking the image starts the process of launching the embedded StackBlitz within an `<iframe>`. The following is an embedded `<live-example>` for this guide: ``` <live-example embedded></live-example> ``` The browser renders the following `<iframe>` and a `<p>` with a link to download the example: Anchors ------- Every section header is also an anchor point. Another guide page could add a link to this "Anchors" section with the following: ``` <div class="alert is-helpful"> See the ["Anchors"](guide/docs-style-guide#anchors "Style Guide —Anchors") section for details. </div> ``` The browser renders the following: > See the ["Anchors"](docs-style-guide#anchors "Style Guide —Anchors") section for details. > > Notice that the above example includes a title of "Style Guide —Anchors". Use titles on anchors to create tooltips and improve UX. When navigating within a page, you can omit the page URL when specifying the link that [scrolls up](docs-style-guide#anchors "Anchors") to the beginning of this section, as in the following: ``` … the link that [scrolls up](#anchors "Anchors") to … ``` ### Section header anchors While the documentation generator automatically creates anchors for headers based on the header wording, titles can change, which can potentially break any links to that section. To mitigate link breakage, add a custom anchor explicitly, just above the heading or text to which it applies, using the special `<a id="name"></a>` syntax as follows: ``` #### Section header anchors ``` Then reference that anchor like this: ``` This is a [link to that custom anchor name](#section-anchors). ``` The browser renders the following: This is a [link to that custom anchor name](docs-style-guide#section-anchors). When editing a file, don't remove any anchors. If you change the document structure, you can move an existing anchor within that same document without breaking a link. You can also add more anchors with more appropriate text. > As an alternative, you can use the HTML `<a>` tag. When using the `<a>` element, set the `id` attribute —rather than the `name` attribute because the documentation generator does not convert the `name` to the proper link URL. For example: > > > ``` > <a id="anchors"></a> > > ## Anchors > ``` > Alerts and callouts ------------------- Alerts and callouts present warnings, extra detail, or references to related topics. An alert or callout should not contain anything essential to understanding the main content. Instructions or tutorial steps should be in the main body of a guide rather than in a subsection. ### Alerts Alerts draw attention to short, important points. For multi-line content, see [callouts](docs-style-guide#callouts "callouts"). > See the [live examples](docs-style-guide#live-examples "Live examples") section for more information. 
> > > **NOTE**: At least one blank line must follow both the opening and closing `<div>` tags. A blank line before the closing `</div>` is conventional but not required. > > ``` <div class="alert is-helpful"> See the [live examples](guide/docs-style-guide#live-examples "Live examples") section for more information. </div> ``` There are three different levels for styling the alerts according to the importance of the content. Use the following the `alert` class along with the appropriate `is-helpful`, `is-important`, or `is-critical` CSS class, as follows: ``` <div class="alert is-helpful"> A helpful, informational alert. </div> ``` ``` <div class="alert is-important"> An important alert. </div> ``` ``` <div class="alert is-critical"> A critical alert. </div> ``` The browser renders the following: > A helpful, informational alert. > > > An important alert. > > > A critical alert. > > ### Callouts Callouts, like alerts, highlight important points. Use a callout when you need a header and multi-line content. If you have more than two paragraphs, consider creating a new page or making it part of the main content. Callouts use the same styling levels as alerts. Use the CSS class `callout` in conjunction with the appropriate `is-helpful`, `is-important`, or `is-critical` class. The following example uses the `is-helpful` class: ``` <div class="callout is-helpful"> <header>A helpful or informational point</header> **A helpful note**. Use a helpful callout for information requiring explanation. Callouts are typically multi-line notes. They can also contain code snippets. </div> ``` The browser renders the three styles as follows: **A helpful note**. Use a helpful callout for information requiring explanation. Callouts are typically multi-line notes. They can also contain code snippets. **An important note**. Use an important callout for significant information requiring explanation. Callouts are typically multi-line notes. They can also contain code snippets. **A critical note**. Use a critical callout for compelling information requiring explanation. Callouts are typically multi-line notes. They can also contain code snippets. When using callouts, consider the following points: * The callout header text style is uppercase * The header does not render in the table of contents * You can write the callout body in Markdown * A blank line separates the `<header>` tag from the Markdown content * Avoid using an `<h2>`, `<h3>`, `<h4>`, `<h5>`, or `<h6>`, as the CSS for callouts styles the `<header>` element Use callouts sparingly to grab the user's attention. Trees ----- Use trees to represent hierarchical data such as directory structure. ``` sample-dir src app app.component.ts app.module.ts styles.css tsconfig.json node_modules … package.json ``` Here is the markup for this file tree. ``` <div class="filetree"> <div class="file"> sample-dir </div> <div class="children"> <div class="file"> src </div> <div class="children"> <div class="file"> app </div> <div class="children"> <div class="file"> app.component.ts </div> <div class="file"> app.module.ts </div> </div> <div class="file"> styles.css </div> <div class="file"> tsconfig.json </div> </div> <div class="file"> node_modules … </div> <div class="file"> package.json </div> </div> </div> ``` Images ------ Store images in the `content/images/guide` directory in a folder with the **same name** as the guide page. 
Because Angular documentation generation copies these images to `generated/images/guide/your-guide-directory`, set the image `src` attribute to the runtime location of `generated/images/guide/your-guide-directory`. For example, images for this "Angular documentation style guide" are in the `content/images/guide/docs-style-guide` folder, but the `src` attribute specifies the `generated` location. The following is the `src` attribute for the "flying hero" image belonging to this guide: ``` src="generated/images/guide/docs-style-guide/flying-hero.png" ``` Specify images using the `<[img](../api/common/ngoptimizedimage)>` tag. **Do not use the Markdown image syntax, `![... ](... )`.** For accessibility, always set the `alt` attribute with a meaningful description of the image. Nest the `<[img](../api/common/ngoptimizedimage)>` tag within a `<div class="lightbox">` tag, which styles the image according to the documentation standard. ``` <div class="lightbox"> <img alt="flying hero" src="generated/images/guide/docs-style-guide/flying-hero.png"> </div> ``` > **NOTE**: The HTML `<[img](../api/common/ngoptimizedimage)>` element does not have a closing tag. > > The browser renders the following: ### Image captions and figure captions A caption appears underneath the image as a concise and comprehensive summary of the image. Captions are optional unless you are using numbered figures, such as Figure 1, Figure 2, and so on. If you are using numbered figures in a page, follow the guidelines in [Figure captions](https://developers.google.com/style/images#figure-captions) in the Google Developer Documentation Style Guide. ### Image dimensions The doc generator reads the image dimensions from an image file and adds `width` and `height` attributes to the `<[img](../api/common/ngoptimizedimage)>` tag automatically. To control the size of the image, supply your own `width` and `height` attributes. Here's the "flying hero" markup with a 200px width: ``` <div class="lightbox"> <img alt="flying Angular hero" src="generated/images/guide/docs-style-guide/flying-hero.png" width="200"> </div> ``` The browser renders the following: ### Wide images To prevent images overflowing their viewports, **use image widths under 700px**. To display a larger image, provide a link to the actual image that the user can click to see the full size image separately, as in this example of `source-map-explorer` output from the "Ahead-of-time Compilation" guide: The following is the HTML for creating a link to the image: ``` <a href="generated/images/guide/docs-style-guide/toh-pt6-bundle.png" title="Click to view larger image"> <div class="lightbox"> <img alt="toh-pt6-bundle" src="generated/images/guide/docs-style-guide/toh-pt6-bundle-700w.png" width="300px"> </div> </a> ``` ### Image compression For faster load times, always compress images. Consider using an image compression web site such as [tinypng](https://tinypng.com/ "tinypng"). ### Floated images You can float the image to the left or right of text by applying the `class="left"` or `class="right"` attributes respectively. For example: ``` <img alt="flying Angular hero" class="left" src="generated/images/guide/docs-style-guide/flying-hero.png" width="200"> This text wraps around to the right of the floating "flying hero" image. Headings and code-examples automatically clear a floated image. If you need to force a piece of text to clear a floating image, add `<br class="clear">` where the text should break. 
<br class="clear"> ``` The browser renders the following: This text wraps around to the right of the floating "flying hero" image. Headings and `<code-example>` components automatically clear a floated image. To explicitly clear a floated image, add `<br class="clear">` where the text should break. Generally, you don't wrap a floated image in a `<figure>` element. ### Floats within a subsection If you have a floated image inside an alert, callout, or a subsection, apply the `clear-fix` class to the `<div>` to ensure that the image doesn't overflow its container. For example: ``` <div class="alert is-helpful clear-fix"> <img alt="flying Angular hero" src="generated/images/guide/docs-style-guide/flying-hero.png" class="right" width="100"> A subsection with **Markdown** formatted text. </div> ``` The browser renders the following: > A subsection with **Markdown** formatted text. > > Help with documentation style ----------------------------- For specific language and grammar usage, a word list, style, tone, and formatting recommendations, see the [Google Developer Documentation Style Guide](https://developers.google.com/style). If you have any questions that this style guide doesn't answer or you would like to discuss documentation styles, see the [Angular repo](https://github.com/angular/angular) and [file a documentation issue](https://github.com/angular/angular/issues/new/choose). @reviewed 2022-02-28
angular Hierarchical injectors Hierarchical injectors ====================== Injectors in Angular have rules that you can leverage to achieve the desired visibility of injectables in your applications. By understanding these rules, you can determine in which NgModule, Component, or Directive you should declare a provider. > **NOTE**: This topic uses the following pictographs. > > > > | Pictograph | Name | > | --- | --- | > | `🌺` | red hibiscus (`🌺`) | > | `🌻` | sunflower (`🌻`) | > | `🌼` | yellow flower (`🌼`) | > | `🌿` | fern (`🌿`) | > | `🍁` | maple leaf (`🍁`) | > | `🐳` | whale (`🐳`) | > | `🐶` | dog (`🐶`) | > | `🦔` | hedgehog (`🦔`) | > > The applications you build with Angular can become quite large, and one way to manage this complexity is to split up the application into many small, well-encapsulated modules that are themselves split up into a well-defined tree of components. There can be sections of your page that work in a completely independent way from the rest of the application, with their own local copies of the services and other dependencies that they need. Some of the services that these sections of the application use might be shared with other parts of the application, or with parent components that are further up in the component tree, while other dependencies are meant to be private. With hierarchical dependency injection, you can isolate sections of the application and give them their own private dependencies not shared with the rest of the application, or have parent components share certain dependencies with their child components only but not with the rest of the component tree, and so on. Hierarchical dependency injection enables you to share dependencies between different parts of the application only when and if you need to. Types of injector hierarchies ----------------------------- Injectors in Angular have rules that you can leverage to achieve the desired visibility of injectables in your applications. By understanding these rules, you can determine in which NgModule, Component, or Directive you should declare a provider. Angular has two injector hierarchies: | Injector hierarchies | Details | | --- | --- | | `ModuleInjector` hierarchy | Configure a `ModuleInjector` in this hierarchy using an `@[NgModule](../api/core/ngmodule)()` or `@[Injectable](../api/core/injectable)()` annotation. | | `ElementInjector` hierarchy | Created implicitly at each DOM element. An `ElementInjector` is empty by default unless you configure it in the `providers` property on `@[Directive](../api/core/directive)()` or `@[Component](../api/core/component)()`. | ### `ModuleInjector` The `ModuleInjector` can be configured in one of two ways by using: * The `@[Injectable](../api/core/injectable)()` `providedIn` property to refer to `@[NgModule](../api/core/ngmodule)()`, or `root` * The `@[NgModule](../api/core/ngmodule)()` `providers` array Using the `@[Injectable](../api/core/injectable)()` `providedIn` property is preferable to using the `@[NgModule](../api/core/ngmodule)()` `providers` array. With `@[Injectable](../api/core/injectable)()` `providedIn`, optimization tools can perform tree-shaking, which removes services that your application isn't using. This results in smaller bundle sizes. Tree-shaking is especially useful for a library because the application which uses the library may not have a need to inject it. Read more about [tree-shakable providers](architecture-services#providing-services) in [Introduction to services and dependency injection](architecture-services). 
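For comparison, the second configuration option, registering a service in an `@[NgModule](../api/core/ngmodule)()` `providers` array, might look like the following minimal sketch. The `Logger` service name is illustrative and not part of the guide's sample application; a service registered this way is always kept in the bundle, even when nothing injects it.

```
import { Injectable, NgModule } from '@angular/core';

// A service declared without `providedIn` must be registered explicitly somewhere.
@Injectable()
export class Logger {
  log(message: string) { console.log(message); }
}

// Registering the service in the NgModule providers array configures the
// ModuleInjector for this module, but the service cannot be tree-shaken away.
@NgModule({
  providers: [Logger],
})
export class AppModule { }
```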
`ModuleInjector` is configured by the `@[NgModule.providers](../api/core/ngmodule#providers)` and `[NgModule.imports](../api/core/ngmodule#imports)` property. `ModuleInjector` is a flattening of all the providers arrays that can be reached by following the `[NgModule.imports](../api/core/ngmodule#imports)` recursively. Child `ModuleInjector` hierarchies are created when lazy loading other `@NgModules`. Provide services with the `providedIn` property of `@[Injectable](../api/core/injectable)()` as follows: ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root' // <--provides this service in the root ModuleInjector }) export class ItemService { name = 'telephone'; } ``` The `@[Injectable](../api/core/injectable)()` decorator identifies a service class. The `providedIn` property configures a specific `ModuleInjector`, here `root`, which makes the service available in the `root` `ModuleInjector`. #### Platform injector There are two more injectors above `root`, an additional `ModuleInjector` and `NullInjector()`. Consider how Angular bootstraps the application with the following in `main.ts`: ``` platformBrowserDynamic().bootstrapModule(AppModule).then(ref => {…}) ``` The `bootstrapModule()` method creates a child injector of the platform injector which is configured by the `AppModule`. This is the `root` `ModuleInjector`. The `[platformBrowserDynamic](../api/platform-browser-dynamic/platformbrowserdynamic)()` method creates an injector configured by a `PlatformModule`, which contains platform-specific dependencies. This allows multiple applications to share a platform configuration. For example, a browser has only one URL bar, no matter how many applications you have running. You can configure additional platform-specific providers at the platform level by supplying `extraProviders` using the `[platformBrowser](../api/platform-browser/platformbrowser)()` function. The next parent injector in the hierarchy is the `NullInjector()`, which is the top of the tree. If you've gone so far up the tree that you are looking for a service in the `NullInjector()`, you'll get an error unless you've used `@[Optional](../api/core/optional)()` because ultimately, everything ends at the `NullInjector()` and it returns an error or, in the case of `@[Optional](../api/core/optional)()`, `null`. For more information on `@[Optional](../api/core/optional)()`, see the [`@Optional()` section](hierarchical-dependency-injection#optional) of this guide. The following diagram represents the relationship between the `root` `ModuleInjector` and its parent injectors as the previous paragraphs describe. While the name `root` is a special alias, other `ModuleInjector` hierarchies don't have aliases. You have the option to create `ModuleInjector` hierarchies whenever a dynamically loaded component is created, such as with the Router, which will create child `ModuleInjector` hierarchies. All requests forward up to the root injector, whether you configured it with the `bootstrapModule()` method, or registered all providers with `root` in their own services. If you configure an app-wide provider in the `@[NgModule](../api/core/ngmodule)()` of `AppModule`, it overrides one configured for `root` in the `@[Injectable](../api/core/injectable)()` metadata. You can do this to configure a non-default provider of a service that is shared with multiple applications. 
Here is an example of the case where the component router configuration includes a non-default [location strategy](router#location-strategy) by listing its provider in the `providers` list of the `AppModule`. ``` providers: [ { provide: LocationStrategy, useClass: HashLocationStrategy } ] ``` ### `ElementInjector` Angular creates `ElementInjector` hierarchies implicitly for each DOM element. Providing a service in the `@[Component](../api/core/component)()` decorator using its `providers` or `viewProviders` property configures an `ElementInjector`. For example, the following `TestComponent` configures the `ElementInjector` by providing the service as follows: ``` @Component({ … providers: [{ provide: ItemService, useValue: { name: 'lamp' } }] }) export class TestComponent ``` > **NOTE**: See the [resolution rules](hierarchical-dependency-injection#resolution-rules) section to understand the relationship between the `ModuleInjector` tree and the `ElementInjector` tree. > > When you provide services in a component, that service is available by way of the `ElementInjector` at that component instance. It may also be visible at child component/directives based on visibility rules described in the [resolution rules](hierarchical-dependency-injection#resolution-rules) section. When the component instance is destroyed, so is that service instance. #### `@[Directive](../api/core/directive)()` and `@[Component](../api/core/component)()` A component is a special type of directive, which means that just as `@[Directive](../api/core/directive)()` has a `providers` property, `@[Component](../api/core/component)()` does too. This means that directives as well as components can configure providers, using the `providers` property. When you configure a provider for a component or directive using the `providers` property, that provider belongs to the `ElementInjector` of that component or directive. Components and directives on the same element share an injector. Resolution rules ---------------- When resolving a token for a component/directive, Angular resolves it in two phases: 1. Against its parents in the `ElementInjector` hierarchy. 2. Against its parents in the `ModuleInjector` hierarchy. When a component declares a dependency, Angular tries to satisfy that dependency with its own `ElementInjector`. If the component's injector lacks the provider, it passes the request up to its parent component's `ElementInjector`. The requests keep forwarding up until Angular finds an injector that can handle the request or runs out of ancestor `ElementInjector` hierarchies. If Angular doesn't find the provider in any `ElementInjector` hierarchies, it goes back to the element where the request originated and looks in the `ModuleInjector` hierarchy. If Angular still doesn't find the provider, it throws an error. If you have registered a provider for the same DI token at different levels, the first one Angular encounters is the one it uses to resolve the dependency. If, for example, a provider is registered locally in the component that needs a service, Angular doesn't look for another provider of the same service. Resolution modifiers -------------------- Angular's resolution behavior can be modified with `@[Optional](../api/core/optional)()`, `@[Self](../api/core/self)()`, `@[SkipSelf](../api/core/skipself)()` and `@[Host](../api/core/host)()`. Import each of them from `@angular/core` and use each in the component class constructor when you inject your service. 
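As a minimal sketch of that usage, the following illustrative component imports the modifiers from `@angular/core` and applies them to its constructor parameters. It assumes the `FlowerService` and `LeafService` used elsewhere in this guide; the component name, selector, and file paths are assumptions.

```
import { Component, Host, Optional, Self, SkipSelf } from '@angular/core';
import { FlowerService } from './flower.service';
import { LeafService } from './leaf.service';

@Component({
  selector: 'app-modifiers-demo',
  template: '{{ flower?.emoji }} {{ leaf?.emoji }}',
})
export class ModifiersDemoComponent {
  constructor(
    // Start the search in the parent injector and resolve to null instead of
    // throwing if no provider is found anywhere up the tree.
    @SkipSelf() @Optional() public flower?: FlowerService,
    // Look only in this element's own injector.
    @Self() @Optional() public leaf?: LeafService,
  ) {}
}
```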
For a working application showcasing the resolution modifiers that this section covers, see the resolution modifiers example. ### Types of modifiers Resolution modifiers fall into three categories: * What to do if Angular doesn't find what you're looking for, that is `@[Optional](../api/core/optional)()` * Where to start looking, that is `@[SkipSelf](../api/core/skipself)()` * Where to stop looking, `@[Host](../api/core/host)()` and `@[Self](../api/core/self)()` By default, Angular always starts at the current `[Injector](../api/core/injector)` and keeps searching all the way up. Modifiers allow you to change the starting, or *self*, location and the ending location. Additionally, you can combine all of the modifiers except `@[Host](../api/core/host)()` and `@[Self](../api/core/self)()` and of course `@[SkipSelf](../api/core/skipself)()` and `@[Self](../api/core/self)()`. ### `@[Optional](../api/core/optional)()` `@[Optional](../api/core/optional)()` allows Angular to consider a service you inject to be optional. This way, if it can't be resolved at runtime, Angular resolves the service as `null`, rather than throwing an error. In the following example, the service, `OptionalService`, isn't provided in the service, `@[NgModule](../api/core/ngmodule)()`, or component class, so it isn't available anywhere in the app. ``` export class OptionalComponent { constructor(@Optional() public optional?: OptionalService) {} } ``` ### `@[Self](../api/core/self)()` Use `@[Self](../api/core/self)()` so that Angular will only look at the `ElementInjector` for the current component or directive. A good use case for `@[Self](../api/core/self)()` is to inject a service but only if it is available on the current host element. To avoid errors in this situation, combine `@[Self](../api/core/self)()` with `@[Optional](../api/core/optional)()`. For example, in the following `SelfComponent`, notice the injected `LeafService` in the constructor. ``` @Component({ selector: 'app-self-no-data', templateUrl: './self-no-data.component.html', styleUrls: ['./self-no-data.component.css'] }) export class SelfNoDataComponent { constructor(@Self() @Optional() public leaf?: LeafService) { } } ``` In this example, there is a parent provider and injecting the service will return the value, however, injecting the service with `@[Self](../api/core/self)()` and `@[Optional](../api/core/optional)()` will return `null` because `@[Self](../api/core/self)()` tells the injector to stop searching in the current host element. Another example shows the component class with a provider for `FlowerService`. In this case, the injector looks no further than the current `ElementInjector` because it finds the `FlowerService` and returns the yellow flower `🌼`. ``` @Component({ selector: 'app-self', templateUrl: './self.component.html', styleUrls: ['./self.component.css'], providers: [{ provide: FlowerService, useValue: { emoji: '🌼' } }] }) export class SelfComponent { constructor(@Self() public flower: FlowerService) {} } ``` ### `@[SkipSelf](../api/core/skipself)()` `@[SkipSelf](../api/core/skipself)()` is the opposite of `@[Self](../api/core/self)()`. With `@[SkipSelf](../api/core/skipself)()`, Angular starts its search for a service in the parent `ElementInjector`, rather than in the current one. So if the parent `ElementInjector` were using the fern `🌿` value for `emoji`, but you had maple leaf `🍁` in the component's `providers` array, Angular would ignore maple leaf `🍁` and use fern `🌿`. 
To see this in code, assume that the following value for `emoji` is what the parent component were using, as in this service: ``` export class LeafService { emoji = '🌿'; } ``` Imagine that in the child component, you had a different value, maple leaf `🍁` but you wanted to use the parent's value instead. This is when you'd use `@[SkipSelf](../api/core/skipself)()`: ``` @Component({ selector: 'app-skipself', templateUrl: './skipself.component.html', styleUrls: ['./skipself.component.css'], // Angular would ignore this LeafService instance providers: [{ provide: LeafService, useValue: { emoji: '🍁' } }] }) export class SkipselfComponent { // Use @SkipSelf() in the constructor constructor(@SkipSelf() public leaf: LeafService) { } } ``` In this case, the value you'd get for `emoji` would be fern `🌿`, not maple leaf `🍁`. #### `@[SkipSelf](../api/core/skipself)()` with `@[Optional](../api/core/optional)()` Use `@[SkipSelf](../api/core/skipself)()` with `@[Optional](../api/core/optional)()` to prevent an error if the value is `null`. In the following example, the `Person` service is injected in the constructor. `@[SkipSelf](../api/core/skipself)()` tells Angular to skip the current injector and `@[Optional](../api/core/optional)()` will prevent an error should the `Person` service be `null`. ``` class Person { constructor(@Optional() @SkipSelf() parent?: Person) {} } ``` ### `@[Host](../api/core/host)()` `@[Host](../api/core/host)()` lets you designate a component as the last stop in the injector tree when searching for providers. Even if there is a service instance further up the tree, Angular won't continue looking Use `@[Host](../api/core/host)()` as follows: ``` @Component({ selector: 'app-host', templateUrl: './host.component.html', styleUrls: ['./host.component.css'], // provide the service providers: [{ provide: FlowerService, useValue: { emoji: '🌼' } }] }) export class HostComponent { // use @Host() in the constructor when injecting the service constructor(@Host() @Optional() public flower?: FlowerService) { } } ``` Since `HostComponent` has `@[Host](../api/core/host)()` in its constructor, no matter what the parent of `HostComponent` might have as a `flower.emoji` value, the `HostComponent` will use yellow flower `🌼`. Logical structure of the template --------------------------------- When you provide services in the component class, services are visible within the `ElementInjector` tree relative to where and how you provide those services. Understanding the underlying logical structure of the Angular template will give you a foundation for configuring services and in turn control their visibility. Components are used in your templates, as in the following example: ``` <app-root> <app-child></app-child> </app-root> ``` > **NOTE**: Usually, you declare the components and their templates in separate files. For the purposes of understanding how the injection system works, it is useful to look at them from the point of view of a combined logical tree. The term *logical* distinguishes it from the render tree, which is your application's DOM tree. To mark the locations of where the component templates are located, this guide uses the `<#VIEW>` pseudo-element, which doesn't actually exist in the render tree and is present for mental model purposes only. 
> > The following is an example of how the `<app-root>` and `<app-child>` view trees are combined into a single logical tree: ``` <app-root> <#VIEW> <app-child> <#VIEW> …content goes here… </#VIEW> </app-child> </#VIEW> </app-root> ``` Understanding the idea of the `<#VIEW>` demarcation is especially significant when you configure services in the component class. Providing services in `@[Component](../api/core/component)()` ------------------------------------------------------------- How you provide services using a `@[Component](../api/core/component)()` (or `@[Directive](../api/core/directive)()`) decorator determines their visibility. The following sections demonstrate `providers` and `viewProviders` along with ways to modify service visibility with `@[SkipSelf](../api/core/skipself)()` and `@[Host](../api/core/host)()`. A component class can provide services in two ways: | Arrays | Details | | --- | --- | | With a `providers` array | ``` @Component({   …   providers: [     {provide: FlowerService, useValue: {emoji: '🌺'}}   ] }) ``` | | With a `viewProviders` array | ``` @Component({   …  viewProviders: [     {provide: AnimalService, useValue: {emoji: '🐶'}}   ] }) ``` | To understand how the `providers` and `viewProviders` influence service visibility differently, the following sections build a step-by-step and compare the use of `providers` and `viewProviders` in code and a logical tree. > **NOTE**: In the logical tree, you'll see `@Provide`, `@[Inject](../api/core/inject)`, and `@[NgModule](../api/core/ngmodule)`, which are not real HTML attributes but are here to demonstrate what is going on under the hood. > > > > | Angular service attribute | Details | > | --- | --- | > | > ``` > @Inject(Token)=>Value > ``` > | Demonstrates that if `Token` is injected at this location in the logical tree its value would be `Value`. | > | > ``` > @Provide(Token=Value) > ``` > | Demonstrates that there is a declaration of `Token` provider with value `Value` at this location in the logical tree. | > | > ``` > @NgModule(Token) > ``` > | Demonstrates that a fallback `[NgModule](../api/core/ngmodule)` injector should be used at this location. | > > ### Example app structure The example application has a `FlowerService` provided in `root` with an `emoji` value of red hibiscus `🌺`. ``` @Injectable({ providedIn: 'root' }) export class FlowerService { emoji = '🌺'; } ``` Consider an application with only an `AppComponent` and a `ChildComponent`. The most basic rendered view would look like nested HTML elements such as the following: ``` <app-root> <!-- AppComponent selector --> <app-child> <!-- ChildComponent selector --> </app-child> </app-root> ``` However, behind the scenes, Angular uses a logical view representation as follows when resolving injection requests: ``` <app-root> <!-- AppComponent selector --> <#VIEW> <app-child> <!-- ChildComponent selector --> <#VIEW> </#VIEW> </app-child> </#VIEW> </app-root> ``` The `<#VIEW>` here represents an instance of a template. Notice that each component has its own `<#VIEW>`. Knowledge of this structure can inform how you provide and inject your services, and give you complete control of service visibility. 
Now, consider that `<app-root>` injects the `FlowerService`: ``` export class AppComponent { constructor(public flower: FlowerService) {} } ``` Add a binding to the `<app-root>` template to visualize the result: ``` <p>Emoji from FlowerService: {{flower.emoji}}</p> ``` The output in the view would be: ``` Emoji from FlowerService: 🌺 ``` In the logical tree, this would be represented as follows: ``` <app-root @NgModule(AppModule) @Inject(FlowerService) flower=>"🌺"> <#VIEW> <p>Emoji from FlowerService: {{flower.emoji}} (🌺)</p> <app-child> <#VIEW> </#VIEW> </app-child> </#VIEW> </app-root> ``` When `<app-root>` requests the `FlowerService`, it is the injector's job to resolve the `FlowerService` token. The resolution of the token happens in two phases: 1. The injector determines the starting location in the logical tree and an ending location of the search. The injector begins with the starting location and looks for the token at each level in the logical tree. If the token is found it is returned. 2. If the token is not found, the injector looks for the closest parent `@[NgModule](../api/core/ngmodule)()` to delegate the request to. In the example case, the constraints are: 1. Start with `<#VIEW>` belonging to `<app-root>` and end with `<app-root>`. * Normally the starting point for search is at the point of injection. However, in this case `<app-root>` `@[Component](../api/core/component)`s are special in that they also include their own `viewProviders`, which is why the search starts at `<#VIEW>` belonging to `<app-root>`. This would not be the case for a directive matched at the same location. * The ending location happens to be the same as the component itself, because it is the topmost component in this application. 2. The `AppModule` acts as the fallback injector when the injection token can't be found in the `ElementInjector` hierarchies. ### Using the `providers` array Now, in the `ChildComponent` class, add a provider for `FlowerService` to demonstrate more complex resolution rules in the upcoming sections: ``` @Component({ selector: 'app-child', templateUrl: './child.component.html', styleUrls: ['./child.component.css'], // use the providers array to provide a service providers: [{ provide: FlowerService, useValue: { emoji: '🌻' } }] }) export class ChildComponent { // inject the service constructor( public flower: FlowerService) { } } ``` Now that the `FlowerService` is provided in the `@[Component](../api/core/component)()` decorator, when the `<app-child>` requests the service, the injector has only to look as far as the `ElementInjector` in the `<app-child>`. It won't have to continue the search any further through the injector tree. The next step is to add a binding to the `ChildComponent` template. 
``` <p>Emoji from FlowerService: {{flower.emoji}}</p> ``` To render the new values, add `<app-child>` to the bottom of the `AppComponent` template so the view also displays the sunflower: ``` Child Component Emoji from FlowerService: 🌻 ``` In the logical tree, this is represented as follows: ``` <app-root @NgModule(AppModule) @Inject(FlowerService) flower=>"🌺"> <#VIEW> <p>Emoji from FlowerService: {{flower.emoji}} (🌺)</p> <app-child @Provide(FlowerService="🌻") @Inject(FlowerService)=>"🌻"> <!-- search ends here --> <#VIEW> <!-- search starts here --> <h2>Parent Component</h2> <p>Emoji from FlowerService: {{flower.emoji}} (🌻)</p> </#VIEW> </app-child> </#VIEW> </app-root> ``` When `<app-child>` requests the `FlowerService`, the injector begins its search at the `<#VIEW>` belonging to `<app-child>` (`<#VIEW>` is included because it is injected from `@[Component](../api/core/component)()`) and ends with `<app-child>`. In this case, the `FlowerService` is resolved in the `providers` array with sunflower `🌻` of the `<app-child>`. The injector doesn't have to look any further in the injector tree. It stops as soon as it finds the `FlowerService` and never sees the red hibiscus `🌺`. ### Using the `viewProviders` array Use the `viewProviders` array as another way to provide services in the `@[Component](../api/core/component)()` decorator. Using `viewProviders` makes services visible in the `<#VIEW>`. > The steps are the same as using the `providers` array, with the exception of using the `viewProviders` array instead. > > For step-by-step instructions, continue with this section. If you can set it up on your own, skip ahead to [Modifying service availability](hierarchical-dependency-injection#modify-visibility). > > The example application features a second service, the `AnimalService` to demonstrate `viewProviders`. First, create an `AnimalService` with an `emoji` property of whale `🐳`: ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root' }) export class AnimalService { emoji = '🐳'; } ``` Following the same pattern as with the `FlowerService`, inject the `AnimalService` in the `AppComponent` class: ``` export class AppComponent { constructor(public flower: FlowerService, public animal: AnimalService) {} } ``` > **NOTE**: You can leave all the `FlowerService` related code in place as it will allow a comparison with the `AnimalService`. > > Add a `viewProviders` array and inject the `AnimalService` in the `<app-child>` class, too, but give `emoji` a different value. Here, it has a value of dog `🐶`. ``` @Component({ selector: 'app-child', templateUrl: './child.component.html', styleUrls: ['./child.component.css'], // provide services providers: [{ provide: FlowerService, useValue: { emoji: '🌻' } }], viewProviders: [{ provide: AnimalService, useValue: { emoji: '🐶' } }] }) export class ChildComponent { // inject service constructor( public flower: FlowerService, public animal: AnimalService) { } } ``` Add bindings to the `ChildComponent` and the `AppComponent` templates. 
In the `ChildComponent` template, add the following binding: ``` <p>Emoji from AnimalService: {{animal.emoji}}</p> ``` Additionally, add the same to the `AppComponent` template: ``` <p>Emoji from AnimalService: {{animal.emoji}}</p> ``` Now you should see both values in the browser: ``` AppComponent Emoji from AnimalService: 🐳 Child Component Emoji from AnimalService: 🐶 ``` The logic tree for this example of `viewProviders` is as follows: ``` <app-root @NgModule(AppModule) @Inject(AnimalService) animal=>"🐳"> <#VIEW> <app-child> <#VIEW @Provide(AnimalService="🐶") @Inject(AnimalService=>"🐶")> <!-- ^^using viewProviders means AnimalService is available in <#VIEW>--> <p>Emoji from AnimalService: {{animal.emoji}} (🐶)</p> </#VIEW> </app-child> </#VIEW> </app-root> ``` Just as with the `FlowerService` example, the `AnimalService` is provided in the `<app-child>` `@[Component](../api/core/component)()` decorator. This means that since the injector first looks in the `ElementInjector` of the component, it finds the `AnimalService` value of dog `🐶`. It doesn't need to continue searching the `ElementInjector` tree, nor does it need to search the `ModuleInjector`. ### `providers` vs. `viewProviders` To see the difference between using `providers` and `viewProviders`, add another component to the example and call it `InspectorComponent`. `InspectorComponent` will be a child of the `ChildComponent`. In `inspector.component.ts`, inject the `FlowerService` and `AnimalService` in the constructor: ``` export class InspectorComponent { constructor(public flower: FlowerService, public animal: AnimalService) { } } ``` You do not need a `providers` or `viewProviders` array. Next, in `inspector.component.html`, add the same markup from previous components: ``` <p>Emoji from FlowerService: {{flower.emoji}}</p> <p>Emoji from AnimalService: {{animal.emoji}}</p> ``` Remember to add the `InspectorComponent` to the `AppModule` `declarations` array. ``` @NgModule({ imports: [ BrowserModule, FormsModule ], declarations: [ AppComponent, ChildComponent, InspectorComponent ], bootstrap: [ AppComponent ], providers: [] }) export class AppModule { } ``` Next, make sure your `child.component.html` contains the following: ``` <p>Emoji from FlowerService: {{flower.emoji}}</p> <p>Emoji from AnimalService: {{animal.emoji}}</p> <div class="container"> <h3>Content projection</h3> <ng-content></ng-content> </div> <h3>Inside the view</h3> <app-inspector></app-inspector> ``` The first two lines, with the bindings, are there from previous steps. The new parts are `[<ng-content>](../api/core/ng-content)` and `<app-inspector>`. `[<ng-content>](../api/core/ng-content)` allows you to project content, and `<app-inspector>` inside the `ChildComponent` template makes the `InspectorComponent` a child component of `ChildComponent`. Next, add the following to `app.component.html` to take advantage of content projection. ``` <app-child><app-inspector></app-inspector></app-child> ``` The browser now renders the following, omitting the previous examples for brevity: ``` //…Omitting previous examples. The following applies to this section. Content projection: this is coming from content. Doesn't get to see puppy because the puppy is declared inside the view only. Emoji from FlowerService: 🌻 Emoji from AnimalService: 🐳 Emoji from FlowerService: 🌻 Emoji from AnimalService: 🐶 ``` These four bindings demonstrate the difference between `providers` and `viewProviders`. 
Since the dog `🐶` is declared inside the `<#VIEW>`, it isn't visible to the projected content. Instead, the projected content sees the whale `🐳`. In the next case, though, `InspectorComponent` is a child component of `ChildComponent`, so it is inside the `<#VIEW>`; when it asks for the `AnimalService`, it sees the dog `🐶`. The `AnimalService` in the logical tree would look like this: ``` <app-root @NgModule(AppModule) @Inject(AnimalService) animal=>"🐳"> <#VIEW> <app-child> <#VIEW @Provide(AnimalService="🐶") @Inject(AnimalService=>"🐶")> <!-- ^^using viewProviders means AnimalService is available in <#VIEW>--> <p>Emoji from AnimalService: {{animal.emoji}} (🐶)</p> <div class="container"> <h3>Content projection</h3> <app-inspector @Inject(AnimalService) animal=>"🐳"> <p>Emoji from AnimalService: {{animal.emoji}} (🐳)</p> </app-inspector> </div> </#VIEW> <app-inspector> <#VIEW> <p>Emoji from AnimalService: {{animal.emoji}} (🐶)</p> </#VIEW> </app-inspector> </app-child> </#VIEW> </app-root> ``` The projected content of `<app-inspector>` sees the whale `🐳`, not the dog `🐶`, because the dog `🐶` is inside the `<app-child>` `<#VIEW>`. The `<app-inspector>` can only see the dog `🐶` if it is also within the `<#VIEW>`. Modifying service visibility ---------------------------- This section describes how to limit the scope of the beginning and ending `ElementInjector` using the visibility decorators `@[Host](../api/core/host)()`, `@[Self](../api/core/self)()`, and `@[SkipSelf](../api/core/skipself)()`. ### Visibility of provided tokens Visibility decorators influence where the search for the injection token begins and ends in the logic tree. To do this, place visibility decorators at the point of injection, that is, the `constructor()`, rather than at a point of declaration. To alter where the injector starts looking for `FlowerService`, add `@[SkipSelf](../api/core/skipself)()` to the `<app-child>` `@[Inject](../api/core/inject)` declaration for the `FlowerService`. This declaration is in the `<app-child>` constructor as shown in `child.component.ts`: ``` constructor(@SkipSelf() public flower : FlowerService) { } ``` With `@[SkipSelf](../api/core/skipself)()`, the `<app-child>` injector doesn't look to itself for the `FlowerService`. Instead, the injector starts looking for the `FlowerService` at the `ElementInjector` of the `<app-root>`, where it finds nothing. Then, it goes back to the `<app-child>` `ModuleInjector` and finds the red hibiscus `🌺` value, which is available because the `<app-child>` `ModuleInjector` and the `<app-root>` `ModuleInjector` are flattened into one `ModuleInjector`. Thus, the UI renders the following: ``` Emoji from FlowerService: 🌺 ``` In a logical tree, this same idea might look like this: ``` <app-root @NgModule(AppModule) @Inject(FlowerService) flower=>"🌺"> <#VIEW> <app-child @Provide(FlowerService="🌻")> <#VIEW @Inject(FlowerService, SkipSelf)=>"🌺"> <!-- With SkipSelf, the injector looks to the next injector up the tree --> </#VIEW> </app-child> </#VIEW> </app-root> ``` Though `<app-child>` provides the sunflower `🌻`, the application renders the red hibiscus `🌺` because `@[SkipSelf](../api/core/skipself)()` causes the current injector to skip itself and look to its parent. If you now add `@[Host](../api/core/host)()` (in addition to the `@[SkipSelf](../api/core/skipself)()`) to the `@[Inject](../api/core/inject)` of the `FlowerService`, the result will be `null`. 
This is because `@[Host](../api/core/host)()` limits the upper bound of the search to the `<#VIEW>`. Here's the idea in the logical tree: ``` <app-root @NgModule(AppModule) @Inject(FlowerService) flower=>"🌺"> <#VIEW> <!-- end search here with null--> <app-child @Provide(FlowerService="🌻")> <!-- start search here --> <#VIEW @Inject(FlowerService, @SkipSelf, @Host, @Optional)=>null> </#VIEW> </app-child> </#VIEW> </app-root> ``` Here, the services and their values are the same, but `@[Host](../api/core/host)()` stops the injector from looking any further than the `<#VIEW>` for `FlowerService`, so it doesn't find it and returns `null`. > **NOTE**: The example application uses `@[Optional](../api/core/optional)()` so the application does not throw an error, but the principles are the same. > > ### `@[SkipSelf](../api/core/skipself)()` and `viewProviders` The `<app-child>` currently provides the `AnimalService` in the `viewProviders` array with the value of dog `🐶`. Because the injector has only to look at the `ElementInjector` of the `<app-child>` for the `AnimalService`, it never sees the whale `🐳`. As in the `FlowerService` example, if you add `@[SkipSelf](../api/core/skipself)()` to the constructor for the `AnimalService`, the injector won't look in the `ElementInjector` of the current `<app-child>` for the `AnimalService`. ``` export class ChildComponent { // add @SkipSelf() constructor(@SkipSelf() public animal : AnimalService) { } } ``` Instead, the injector will begin at the `<app-root>` `ElementInjector`. Remember that the `<app-child>` class provides the `AnimalService` in the `viewProviders` array with a value of dog `🐶`: ``` @Component({ selector: 'app-child', … viewProviders: [{ provide: AnimalService, useValue: { emoji: '🐶' } }] }) ``` The logical tree looks like this with `@[SkipSelf](../api/core/skipself)()` in `<app-child>`: ``` <app-root @NgModule(AppModule) @Inject(AnimalService=>"🐳")> <#VIEW><!-- search begins here --> <app-child> <#VIEW @Provide(AnimalService="🐶") @Inject(AnimalService, SkipSelf=>"🐳")> <!--Add @SkipSelf --> </#VIEW> </app-child> </#VIEW> </app-root> ``` With `@[SkipSelf](../api/core/skipself)()` in the `<app-child>`, the injector begins its search for the `AnimalService` in the `<app-root>` `ElementInjector` and finds whale `🐳`. ### `@[Host](../api/core/host)()` and `viewProviders` If you add `@[Host](../api/core/host)()` to the constructor for `AnimalService`, the result is dog `🐶` because the injector finds the `AnimalService` in the `<app-child>` `<#VIEW>`. Here is the `viewProviders` array in the `<app-child>` class and `@[Host](../api/core/host)()` in the constructor: ``` @Component({ selector: 'app-child', … viewProviders: [{ provide: AnimalService, useValue: { emoji: '🐶' } }] }) export class ChildComponent { constructor(@Host() public animal : AnimalService) { } } ``` `@[Host](../api/core/host)()` causes the injector to look until it encounters the edge of the `<#VIEW>`. 
``` <app-root @NgModule(AppModule) @Inject(AnimalService=>"🐳")> <#VIEW> <app-child> <#VIEW @Provide(AnimalService="🐶") @Inject(AnimalService, @Host=>"🐶")> <!-- @Host stops search here --> </#VIEW> </app-child> </#VIEW> </app-root> ``` Add a `viewProviders` array with a third animal, hedgehog `🦔`, to the `app.component.ts` `@[Component](../api/core/component)()` metadata: ``` @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: [ './app.component.css' ], viewProviders: [{ provide: AnimalService, useValue: { emoji: '🦔' } }] }) ``` Next, add `@[SkipSelf](../api/core/skipself)()` along with `@[Host](../api/core/host)()` to the constructor for the `AnimalService` in `child.component.ts`. Here are `@[Host](../api/core/host)()` and `@[SkipSelf](../api/core/skipself)()` in the `<app-child>` constructor: ``` export class ChildComponent { constructor( @Host() @SkipSelf() public animal : AnimalService) { } } ``` When `@[Host](../api/core/host)()` and `@[SkipSelf](../api/core/skipself)()` were applied to the `FlowerService`, which is in the `providers` array, the result was `null` because `@[SkipSelf](../api/core/skipself)()` starts its search in the `<app-child>` injector, but `@[Host](../api/core/host)()` stops searching at `<#VIEW>` —where there is no `FlowerService`. In the logical tree, you can see that the `FlowerService` is visible in `<app-child>`, not its `<#VIEW>`. However, the `AnimalService`, which is provided in the `AppComponent` `viewProviders` array, is visible. The logical tree representation shows why this is: ``` <app-root @NgModule(AppModule) @Inject(AnimalService=>"🐳")> <#VIEW @Provide(AnimalService="🦔") @Inject(AnimalService, @Optional)=>"🦔"> <!-- ^^@SkipSelf() starts here, @Host() stops here^^ --> <app-child> <#VIEW @Provide(AnimalService="🐶") @Inject(AnimalService, @SkipSelf, @Host, @Optional)=>"🦔"> <!-- Add @SkipSelf ^^--> </#VIEW> </app-child> </#VIEW> </app-root> ``` `@[SkipSelf](../api/core/skipself)()` causes the injector to start its search for the `AnimalService` at the `<app-root>`, not the `<app-child>`, where the request originates, and `@[Host](../api/core/host)()` stops the search at the `<app-root>` `<#VIEW>`. Since `AnimalService` is provided by way of the `viewProviders` array, the injector finds hedgehog `🦔` in the `<#VIEW>`. `ElementInjector` use case examples ------------------------------------ The ability to configure one or more providers at different levels opens up useful possibilities. For a look at the following scenarios in a working app, see the heroes use case examples. ### Scenario: service isolation Architectural reasons may lead you to restrict access to a service to the application domain where it belongs. For example, the guide sample includes a `VillainsListComponent` that displays a list of villains. It gets those villains from a `VillainsService`. If you provided `VillainsService` in the root `AppModule` (where you registered the `HeroesService`), that would make the `VillainsService` visible everywhere in the application, including the *Hero* workflows. If you later modified the `VillainsService`, you could break something in a hero component somewhere. 
Instead, you can provide the `VillainsService` in the `providers` metadata of the `VillainsListComponent` like this: ``` @Component({ selector: 'app-villains-list', templateUrl: './villains-list.component.html', providers: [ VillainsService ] }) ``` By providing `VillainsService` in the `VillainsListComponent` metadata and nowhere else, the service becomes available only in the `VillainsListComponent` and its subcomponent tree. `VillainsService` is a singleton with respect to `VillainsListComponent` because that is where it is declared. As long as `VillainsListComponent` does not get destroyed, it will be the same instance of `VillainsService`; but if there are multiple instances of `VillainsListComponent`, then each instance will have its own instance of `VillainsService`. ### Scenario: multiple edit sessions Many applications allow users to work on several open tasks at the same time. For example, in a tax preparation application, the preparer could be working on several tax returns, switching from one to the other throughout the day. To demonstrate that scenario, imagine an outer `HeroListComponent` that displays a list of super heroes. To open a hero's tax return, the preparer clicks on a hero name, which opens a component for editing that return. Each selected hero tax return opens in its own component and multiple returns can be open at the same time. Each tax return component has the following characteristics: * Is its own tax return editing session * Can change a tax return without affecting a return in another component * Has the ability to save the changes to its tax return or cancel them Suppose that the `HeroTaxReturnComponent` had logic to manage and restore changes. That would be a straightforward task for a hero tax return. In the real world, with a rich tax return data model, the change management would be tricky. You could delegate that management to a helper service, as this example does. The `HeroTaxReturnService` caches a single `HeroTaxReturn`, tracks changes to that return, and can save or restore it. It also delegates to the application-wide singleton `HeroesService`, which it gets by injection. ``` import { Injectable } from '@angular/core'; import { HeroTaxReturn } from './hero'; import { HeroesService } from './heroes.service'; @Injectable() export class HeroTaxReturnService { private currentTaxReturn!: HeroTaxReturn; private originalTaxReturn!: HeroTaxReturn; constructor(private heroService: HeroesService) { } set taxReturn(htr: HeroTaxReturn) { this.originalTaxReturn = htr; this.currentTaxReturn = htr.clone(); } get taxReturn(): HeroTaxReturn { return this.currentTaxReturn; } restoreTaxReturn() { this.taxReturn = this.originalTaxReturn; } saveTaxReturn() { this.taxReturn = this.currentTaxReturn; this.heroService.saveTaxReturn(this.currentTaxReturn).subscribe(); } } ``` Here is the `HeroTaxReturnComponent` that makes use of `HeroTaxReturnService`. 
``` import { Component, EventEmitter, Input, Output } from '@angular/core'; import { HeroTaxReturn } from './hero'; import { HeroTaxReturnService } from './hero-tax-return.service'; @Component({ selector: 'app-hero-tax-return', templateUrl: './hero-tax-return.component.html', styleUrls: [ './hero-tax-return.component.css' ], providers: [ HeroTaxReturnService ] }) export class HeroTaxReturnComponent { message = ''; @Output() close = new EventEmitter<void>(); get taxReturn(): HeroTaxReturn { return this.heroTaxReturnService.taxReturn; } @Input() set taxReturn(htr: HeroTaxReturn) { this.heroTaxReturnService.taxReturn = htr; } constructor(private heroTaxReturnService: HeroTaxReturnService) { } onCanceled() { this.flashMessage('Canceled'); this.heroTaxReturnService.restoreTaxReturn(); } onClose() { this.close.emit(); } onSaved() { this.flashMessage('Saved'); this.heroTaxReturnService.saveTaxReturn(); } flashMessage(msg: string) { this.message = msg; setTimeout(() => this.message = '', 500); } } ``` The *tax-return-to-edit* arrives by way of the `@[Input](../api/core/input)()` property, which is implemented with getters and setters. The setter initializes the component's own instance of the `HeroTaxReturnService` with the incoming return. The getter always returns what that service says is the current state of the hero. The component also asks the service to save and restore this tax return. This won't work if the service is an application-wide singleton. Every component would share the same service instance, and each component would overwrite the tax return that belonged to another hero. To prevent this, configure the component-level injector of `HeroTaxReturnComponent` to provide the service, using the `providers` property in the component metadata. ``` providers: [ HeroTaxReturnService ] ``` The `HeroTaxReturnComponent` has its own provider of the `HeroTaxReturnService`. Recall that every component *instance* has its own injector. Providing the service at the component level ensures that *every* instance of the component gets a private instance of the service. This makes sure that no tax return gets overwritten. > The rest of the scenario code relies on other Angular features and techniques that you can learn about elsewhere in the documentation. You can review it and download it from the live example. > > ### Scenario: specialized providers Another reason to provide a service again at another level is to substitute a *more specialized* implementation of that service, deeper in the component tree. For example, consider a `Car` component that includes tire service information and depends on other services to provide more details about the car. The root injector, marked as (A), uses *generic* providers for details about `CarService` and `EngineService`. 1. `Car` component (A). Component (A) displays tire service data about a car and specifies generic services to provide more information about the car. 2. Child component (B). Component (B) defines its own, *specialized* providers for `CarService` and `EngineService` that have special capabilities suitable for what's going on in component (B). 3. Child component (C) as a child of Component (B). Component (C) defines its own, even *more specialized* provider for `CarService`. Behind the scenes, each component sets up its own injector with zero, one, or more providers defined for that component itself. 
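The following sketch shows how the providers in this scenario might be declared. It is only an illustration under assumed names; the class names and selectors are not taken from the guide's sample application.

```
import { Component, Injectable } from '@angular/core';

// Generic services and progressively more specialized variants.
@Injectable() export class CarService { }
@Injectable() export class EngineService { }
@Injectable() export class SpecialCarService extends CarService { }
@Injectable() export class SpecialEngineService extends EngineService { }
@Injectable() export class VerySpecialCarService extends SpecialCarService { }

// Component (A): generic providers.
@Component({
  selector: 'app-car-a',
  template: '<app-car-b></app-car-b>',
  providers: [CarService, EngineService],
})
export class CarAComponent { }

// Component (B): specialized providers for CarService and EngineService.
@Component({
  selector: 'app-car-b',
  template: '<app-car-c></app-car-c>',
  providers: [
    { provide: CarService, useClass: SpecialCarService },
    { provide: EngineService, useClass: SpecialEngineService },
  ],
})
export class CarBComponent { }

// Component (C): an even more specialized provider for CarService only.
@Component({
  selector: 'app-car-c',
  template: 'car',
  providers: [{ provide: CarService, useClass: VerySpecialCarService }],
})
export class CarCComponent { }
```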
When you resolve an instance of `Car` at the deepest component (C), its injector produces: * An instance of `Car` resolved by injector (C) * An `Engine` resolved by injector (B) * Its `Tires` resolved by the root injector (A). More on dependency injection ---------------------------- For more information on Angular dependency injection, see the [DI Providers](dependency-injection-providers) and [DI in Action](dependency-injection-in-action) guides. Last reviewed on Mon Feb 28 2022
angular The RxJS library The RxJS library ================ Reactive programming is an asynchronous programming paradigm concerned with data streams and the propagation of change ([Wikipedia](https://en.wikipedia.org/wiki/Reactive_programming)). RxJS (Reactive Extensions for JavaScript) is a library for reactive programming using observables that makes it easier to compose asynchronous or callback-based code. See ([RxJS Docs](https://rxjs.dev/guide/overview)). RxJS provides an implementation of the `Observable` type, which is needed until the type becomes part of the language and until browsers support it. The library also provides utility functions for creating and working with observables. These utility functions can be used for: * Converting existing code for async operations into observables * Iterating through the values in a stream * Mapping values to different types * Filtering streams * Composing multiple streams Observable creation functions ----------------------------- RxJS offers a number of functions that can be used to create new observables. These functions can simplify the process of creating observables from things such as events, timers, and promises. For example: ``` import { from, Observable } from 'rxjs'; // Create an Observable out of a promise const data = from(fetch('/api/endpoint')); // Subscribe to begin listening for async result data.subscribe({ next(response) { console.log(response); }, error(err) { console.error('Error: ' + err); }, complete() { console.log('Completed'); } }); ``` ``` import { interval } from 'rxjs'; // Create an Observable that will publish a value on an interval const secondsCounter = interval(1000); // Subscribe to begin publishing values const subscription = secondsCounter.subscribe(n => console.log(`It's been ${n + 1} seconds since subscribing!`)); ``` ``` import { fromEvent } from 'rxjs'; const el = document.getElementById('my-element')!; // Create an Observable that will publish mouse movements const mouseMoves = fromEvent<MouseEvent>(el, 'mousemove'); // Subscribe to start listening for mouse-move events const subscription = mouseMoves.subscribe(evt => { // Log coords of mouse movements console.log(`Coords: ${evt.clientX} X ${evt.clientY}`); // When the mouse is over the upper-left of the screen, // unsubscribe to stop listening for mouse movements if (evt.clientX < 40 && evt.clientY < 40) { subscription.unsubscribe(); } }); ``` ``` import { Observable } from 'rxjs'; import { ajax } from 'rxjs/ajax'; // Create an Observable that will create an AJAX request const apiData = ajax('/api/data'); // Subscribe to create the request apiData.subscribe(res => console.log(res.status, res.response)); ``` Operators --------- Operators are functions that build on the observables foundation to enable sophisticated manipulation of collections. For example, RxJS defines operators such as `map()`, `filter()`, `concat()`, and `flatMap()`. Operators take configuration options, and they return a function that takes a source observable. When executing this returned function, the operator observes the source observable's emitted values, transforms them, and returns a new observable of those transformed values. Here is a simple example: ``` import { of } from 'rxjs'; import { map } from 'rxjs/operators'; const nums = of(1, 2, 3); const squareValues = map((val: number) => val * val); const squaredNums = squareValues(nums); squaredNums.subscribe(x => console.log(x)); // Logs // 1 // 4 // 9 ``` You can use *pipes* to link operators together. 
Pipes let you combine multiple functions into a single function. The `pipe()` function takes as its arguments the functions you want to combine, and returns a new function that, when executed, runs the composed functions in sequence.

A set of operators applied to an observable is a recipe: a set of instructions for producing the values you're interested in. By itself, the recipe doesn't do anything. You need to call `subscribe()` to produce a result through the recipe. Here's an example:

```
import { of, pipe } from 'rxjs';
import { filter, map } from 'rxjs/operators';

const nums = of(1, 2, 3, 4, 5);

// Create a function that accepts an Observable.
const squareOddVals = pipe(
  filter((n: number) => n % 2 !== 0),
  map(n => n * n)
);

// Create an Observable that will run the filter and map functions
const squareOdd = squareOddVals(nums);

// Subscribe to run the combined functions
squareOdd.subscribe(x => console.log(x));
```

The `pipe()` function is also a method on the RxJS `Observable`, so you can use this shorter form to define the same operation:

```
import { of } from 'rxjs';
import { filter, map } from 'rxjs/operators';

const squareOdd = of(1, 2, 3, 4, 5)
  .pipe(
    filter(n => n % 2 !== 0),
    map(n => n * n)
  );

// Subscribe to get values
squareOdd.subscribe(x => console.log(x));
```

### Common operators

RxJS provides many operators, but only a handful are used frequently. For a list of operators and usage samples, visit the [RxJS API Documentation](https://rxjs.dev/api).

> **NOTE**: For Angular applications, we prefer combining operators with pipes, rather than chaining. Chaining is used in many RxJS examples.
>
>

| Area | Operators |
| --- | --- |
| Creation | `from`, `fromEvent`, `of` |
| Combination | `combineLatest`, `concat`, `merge`, `startWith`, `withLatestFrom`, `zip` |
| Filtering | `debounceTime`, `distinctUntilChanged`, `filter`, `take`, `takeUntil` |
| Transformation | `bufferTime`, `concatMap`, `map`, `mergeMap`, `scan`, `switchMap` |
| Utility | `tap` |
| Multicasting | `share` |

Error handling
--------------

In addition to the `error()` handler that you provide on subscription, RxJS provides the `catchError` operator that lets you handle known errors in the observable recipe.

For instance, suppose you have an observable that makes an API request and maps to the response from the server. If the server returns an error or the value doesn't exist, an error is produced. If you catch this error and supply a default value, your stream continues to process values rather than erroring out.

Here's an example of using the `catchError` operator to do this:

```
import { of } from 'rxjs';
import { ajax } from 'rxjs/ajax';
import { map, catchError } from 'rxjs/operators';

// Return "response" from the API. If an error happens,
// return an empty array.
const apiData = ajax('/api/data').pipe(
  map((res: any) => {
    if (!res.response) {
      throw new Error('Value expected!');
    }
    return res.response;
  }),
  catchError(() => of([]))
);

apiData.subscribe({
  next(x) { console.log('data: ', x); },
  error() { console.log('errors already caught... will not run'); }
});
```

### Retry failed observable

Where the `catchError` operator provides a simple path of recovery, the `retry` operator lets you retry a failed request.

Use the `retry` operator before the `catchError` operator. It resubscribes to the original source observable, which can then re-run the full sequence of actions that resulted in the error. If this includes an HTTP request, it will retry that HTTP request.
The following converts the previous example to retry the request before catching the error:

```
import { of } from 'rxjs';
import { ajax } from 'rxjs/ajax';
import { map, retry, catchError } from 'rxjs/operators';

const apiData = ajax('/api/data').pipe(
  map((res: any) => {
    if (!res.response) {
      console.log('Error occurred.');
      throw new Error('Value expected!');
    }
    return res.response;
  }),
  retry(3), // Retry up to 3 times before failing
  catchError(() => of([]))
);

apiData.subscribe({
  next(x) { console.log('data: ', x); },
  error() { console.log('errors already caught... will not run'); }
});
```

> Do not retry **authentication** requests, since these should only be initiated by user action. We don't want to lock out user accounts with repeated login requests that the user has not initiated.
>
>

Naming conventions for observables
----------------------------------

Because Angular applications are mostly written in TypeScript, you will typically know when a variable is an observable. Although the Angular framework does not enforce a naming convention for observables, you will often see observables named with a trailing "$" sign.

This can be useful when scanning through code and looking for observable values. Also, if you want a property to store the most recent value from an observable, it can be convenient to use the same name with or without the "$". For example:

```
import { Component } from '@angular/core';
import { Observable } from 'rxjs';

@Component({
  selector: 'app-stopwatch',
  templateUrl: './stopwatch.component.html'
})
export class StopwatchComponent {

  stopwatchValue = 0;
  stopwatchValue$!: Observable<number>;

  start() {
    this.stopwatchValue$.subscribe(num =>
      this.stopwatchValue = num
    );
  }
}
```

Last reviewed on Mon Feb 28 2022

angular Security

Security
========

This topic describes Angular's built-in protections against common web-application vulnerabilities and attacks such as cross-site scripting attacks. It doesn't cover application-level security, such as authentication and authorization.

For more information about the attacks and mitigations described below, see the [Open Web Application Security Project (OWASP) Guide](https://www.owasp.org/index.php/Category:OWASP_Guide_Project).

You can run the live example in Stackblitz and download the code from there.

Angular is part of Google's [Open Source Software Vulnerability Reward Program](https://bughunters.google.com/about/rules/6521337925468160/google-open-source-software-vulnerability-reward-program-rules). For vulnerabilities in Angular, please submit your report [here](https://bughunters.google.com/report). For more information about how Google handles security issues, see [Google's security philosophy](https://www.google.com/about/appsecurity).

| Practices | Details |
| --- | --- |
| Keep current with the latest Angular library releases | The Angular libraries get regular updates, and these updates might fix security defects discovered in previous versions. Check the Angular [change log](https://github.com/angular/angular/blob/main/CHANGELOG.md) for security-related updates. |
| Don't alter your copy of Angular | Private, customized versions of Angular tend to fall behind the current version and might not include important security fixes and enhancements. Instead, share your Angular improvements with the community and make a pull request. |
| Avoid Angular APIs marked in the documentation as "*Security Risk*" | For more information, see the [Trusting safe values](security#bypass-security-apis) section of this page.
| Preventing cross-site scripting (XSS) ------------------------------------- [Cross-site scripting (XSS)](https://en.wikipedia.org/wiki/Cross-site_scripting) enables attackers to inject malicious code into web pages. Such code can then, for example, steal user and login data, or perform actions that impersonate the user. This is one of the most common attacks on the web. To block XSS attacks, you must prevent malicious code from entering the Document Object Model (DOM). For example, if attackers can trick you into inserting a `<script>` tag in the DOM, they can run arbitrary code on your website. The attack isn't limited to `<script>` tags —many elements and properties in the DOM allow code execution, for example, `<[img](../api/common/ngoptimizedimage) alt="" onerror="...">` and `<a href="javascript:...">`. If attacker-controlled data enters the DOM, expect security vulnerabilities. ### Angular's cross-site scripting security model To systematically block XSS bugs, Angular treats all values as untrusted by default. When a value is inserted into the DOM from a template binding, or interpolation, Angular sanitizes and escapes untrusted values. If a value was already sanitized outside of Angular and is considered safe, communicate this to Angular by marking the [value as trusted](security#bypass-security-apis). Unlike values to be used for rendering, Angular templates are considered trusted by default, and should be treated as executable code. Never create templates by concatenating user input and template syntax. Doing this would enable attackers to [inject arbitrary code](https://en.wikipedia.org/wiki/Code_injection) into your application. To prevent these vulnerabilities, always use the default [Ahead-Of-Time (AOT) template compiler](security#offline-template-compiler) in production deployments. An extra layer of protection can be provided through the use of Content security policy and Trusted Types. These web platform features operate at the DOM level which is the most effective place to prevent XSS issues. Here they can't be bypassed using other, lower-level APIs. For this reason, it is strongly encouraged to take advantage of these features. To do this, configure the [content security policy](security#content-security-policy) for the application and enable [trusted types enforcement](security#trusted-types). ### Sanitization and security contexts *Sanitization* is the inspection of an untrusted value, turning it into a value that's safe to insert into the DOM. In many cases, sanitization doesn't change a value at all. Sanitization depends on context: A value that's harmless in CSS is potentially dangerous in a URL. Angular defines the following security contexts: | Security contexts | Details | | --- | --- | | HTML | Used when interpreting a value as HTML, for example, when binding to `innerHtml`. | | Style | Used when binding CSS into the `[style](../api/animations/style)` property. | | URL | Used for URL properties, such as `<a href>`. | | Resource URL | A URL that is loaded and executed as code, for example, in `<script src>`. | Angular sanitizes untrusted values for HTML, styles, and URLs. Sanitizing resource URLs isn't possible because they contain arbitrary code. In development mode, Angular prints a console warning when it has to change a value during sanitization. ### Sanitization example The following template binds the value of `htmlSnippet`. 
Once by interpolating it into an element's content, and once by binding it to the `innerHTML` property of an element: ``` <h3>Binding innerHTML</h3> <p>Bound value:</p> <p class="e2e-inner-html-interpolated">{{htmlSnippet}}</p> <p>Result of binding to innerHTML:</p> <p class="e2e-inner-html-bound" [innerHTML]="htmlSnippet"></p> ``` Interpolated content is always escaped —the HTML isn't interpreted and the browser displays angle brackets in the element's text content. For the HTML to be interpreted, bind it to an HTML property such as `innerHTML`. Be aware that binding a value that an attacker might control into `innerHTML` normally causes an XSS vulnerability. For example, one could run JavaScript in a following way: ``` export class InnerHtmlBindingComponent { // For example, a user/attacker-controlled value from a URL. htmlSnippet = 'Template <script>alert("0wned")</script> <b>Syntax</b>'; } ``` Angular recognizes the value as unsafe and automatically sanitizes it, which removes the `script` element but keeps safe content such as the `<b>` element. ### Direct use of the DOM APIs and explicit sanitization calls Unless you enforce Trusted Types, the built-in browser DOM APIs don't automatically protect you from security vulnerabilities. For example, `document`, the node available through `[ElementRef](../api/core/elementref)`, and many third-party APIs contain unsafe methods. Likewise, if you interact with other libraries that manipulate the DOM, you likely won't have the same automatic sanitization as with Angular interpolations. Avoid directly interacting with the DOM and instead use Angular templates where possible. For cases where this is unavoidable, use the built-in Angular sanitization functions. Sanitize untrusted values with the [DomSanitizer.sanitize](../api/platform-browser/domsanitizer#sanitize) method and the appropriate `[SecurityContext](../api/core/securitycontext)`. That function also accepts values that were marked as trusted using the `bypassSecurityTrust` … functions, and does not sanitize them, as [described below](security#bypass-security-apis). ### Trusting safe values Sometimes applications genuinely need to include executable code, display an `<iframe>` from some URL, or construct potentially dangerous URLs. To prevent automatic sanitization in these situations, tell Angular that you inspected a value, checked how it was created, and made sure it is secure. Do *be careful*. If you trust a value that might be malicious, you are introducing a security vulnerability into your application. If in doubt, find a professional security reviewer. To mark a value as trusted, inject `[DomSanitizer](../api/platform-browser/domsanitizer)` and call one of the following methods: * `bypassSecurityTrustHtml` * `bypassSecurityTrustScript` * `bypassSecurityTrustStyle` * `bypassSecurityTrustUrl` * `bypassSecurityTrustResourceUrl` Remember, whether a value is safe depends on context, so choose the right context for your intended use of the value. Imagine that the following template needs to bind a URL to a `javascript:alert(...)` call: ``` <h4>An untrusted URL:</h4> <p><a class="e2e-dangerous-url" [href]="dangerousUrl">Click me</a></p> <h4>A trusted URL:</h4> <p><a class="e2e-trusted-url" [href]="trustedUrl">Click me</a></p> ``` Normally, Angular automatically sanitizes the URL, disables the dangerous code, and in development mode, logs this action to the console. 
To prevent this, mark the URL value as a trusted URL using the `bypassSecurityTrustUrl` call: ``` constructor(private sanitizer: DomSanitizer) { // javascript: URLs are dangerous if attacker controlled. // Angular sanitizes them in data binding, but you can // explicitly tell Angular to trust this value: this.dangerousUrl = 'javascript:alert("Hi there")'; this.trustedUrl = sanitizer.bypassSecurityTrustUrl(this.dangerousUrl); ``` If you need to convert user input into a trusted value, use a component method. The following template lets users enter a YouTube video ID and load the corresponding video in an `<iframe>`. The `<iframe src>` attribute is a resource URL security context, because an untrusted source can, for example, smuggle in file downloads that unsuspecting users could run. To prevent this, call a method on the component to construct a trusted video URL, which causes Angular to let binding into `<iframe src>`: ``` <h4>Resource URL:</h4> <p>Showing: {{dangerousVideoUrl}}</p> <p>Trusted:</p> <iframe class="e2e-iframe-trusted-src" width="640" height="390" [src]="videoUrl"></iframe> <p>Untrusted:</p> <iframe class="e2e-iframe-untrusted-src" width="640" height="390" [src]="dangerousVideoUrl"></iframe> ``` ``` updateVideoUrl(id: string) { // Appending an ID to a YouTube URL is safe. // Always make sure to construct SafeValue objects as // close as possible to the input data so // that it's easier to check if the value is safe. this.dangerousVideoUrl = 'https://www.youtube.com/embed/' + id; this.videoUrl = this.sanitizer.bypassSecurityTrustResourceUrl(this.dangerousVideoUrl); } ``` ### Content security policy Content Security Policy (CSP) is a defense-in-depth technique to prevent XSS. To enable CSP, configure your web server to return an appropriate `Content-Security-Policy` HTTP header. Read more about content security policy at the [Web Fundamentals guide](https://developers.google.com/web/fundamentals/security/csp) on the Google Developers website. The minimal policy required for brand-new Angular is: ``` default-src 'self'; style-src 'self' 'unsafe-inline'; ``` | Sections | Details | | --- | --- | | `default-src 'self';` | Allows the page to load all its required resources from the same origin. | | `style-src 'self' 'unsafe-inline';` | Allows the page to load global styles from the same origin (`'self'`) and enables components to load their styles (`'unsafe-inline'` - see [`angular/angular#6361`](https://github.com/angular/angular/issues/6361)). | Angular itself requires only these settings to function correctly. As your project grows, you may need to expand your CSP settings to accommodate extra features specific to your application. ### Enforcing Trusted Types It is recommended that you use [Trusted Types](https://w3c.github.io/trusted-types/dist/spec/) as a way to help secure your applications from cross-site scripting attacks. Trusted Types is a [web platform](https://en.wikipedia.org/wiki/Web_platform) feature that can help you prevent cross-site scripting attacks by enforcing safer coding practices. Trusted Types can also help simplify the auditing of application code. Trusted Types might not yet be available in all browsers your application targets. In the case your Trusted-Types-enabled application runs in a browser that doesn't support Trusted Types, the features of the application are preserved. Your application is guarded against XSS by way of Angular's DomSanitizer. See [caniuse.com/trusted-types](https://caniuse.com/trusted-types) for the current browser support. 
To enforce Trusted Types for your application, you must configure your application's web server to emit HTTP headers with one of the following Angular policies: | Policies | Detail | | --- | --- | | `angular` | This policy is used in security-reviewed code that is internal to Angular, and is required for Angular to function when Trusted Types are enforced. Any inline template values or content sanitized by Angular is treated as safe by this policy. | | `angular#unsafe-bypass` | This policy is used for applications that use any of the methods in Angular's [DomSanitizer](../api/platform-browser/domsanitizer) that bypass security, such as `bypassSecurityTrustHtml`. Any application that uses these methods must enable this policy. | | `angular#unsafe-jit` | This policy is used by the [Just-In-Time (JIT) compiler](../api/core/compiler). You must enable this policy if your application interacts directly with the JIT compiler or is running in JIT mode using the [platform browser dynamic](../api/platform-browser-dynamic/platformbrowserdynamic). | | `angular#bundler` | This policy is used by the Angular CLI bundler when creating lazy chunk files. | You should configure the HTTP headers for Trusted Types in the following locations: * Production serving infrastructure * Angular CLI (`ng serve`), using the `headers` property in the `angular.json` file, for local development and end-to-end testing * Karma (`ng test`), using the `customHeaders` property in the `karma.config.js` file, for unit testing The following is an example of a header specifically configured for Trusted Types and Angular: ``` Content-Security-Policy: trusted-types angular; require-trusted-types-for 'script'; ``` An example of a header specifically configured for Trusted Types and Angular applications that use any of Angular's methods in [DomSanitizer](../api/platform-browser/domsanitizer) that bypasses security: ``` Content-Security-Policy: trusted-types angular angular#unsafe-bypass; require-trusted-types-for 'script'; ``` The following is an example of a header specifically configured for Trusted Types and Angular applications using JIT: ``` Content-Security-Policy: trusted-types angular angular#unsafe-jit; require-trusted-types-for 'script'; ``` The following is an example of a header specifically configured for Trusted Types and Angular applications that use lazy loading of modules: ``` Content-Security-Policy: trusted-types angular angular#bundler; require-trusted-types-for 'script'; ``` To learn more about troubleshooting Trusted Type configurations, the following resource might be helpful: [Prevent DOM-based cross-site scripting vulnerabilities with Trusted Types](https://web.dev/trusted-types/#how-to-use-trusted-types) ### Use the AOT template compiler The AOT template compiler prevents a whole class of vulnerabilities called template injection, and greatly improves application performance. The AOT template compiler is the default compiler used by Angular CLI applications, and you should use it in all production deployments. An alternative to the AOT compiler is the JIT compiler which compiles templates to executable template code within the browser at runtime. Angular trusts template code, so dynamically generating templates and compiling them, in particular templates containing user data, circumvents Angular's built-in protections. This is a security anti-pattern. For information about dynamically constructing forms in a safe way, see the [Dynamic Forms](dynamic-form) guide. 
### Server-side XSS protection HTML constructed on the server is vulnerable to injection attacks. Injecting template code into an Angular application is the same as injecting executable code into the application: It gives the attacker full control over the application. To prevent this, use a templating language that automatically escapes values to prevent XSS vulnerabilities on the server. Don't create Angular templates on the server side using a templating language. This carries a high risk of introducing template-injection vulnerabilities. HTTP-level vulnerabilities -------------------------- Angular has built-in support to help prevent two common HTTP vulnerabilities, cross-site request forgery (CSRF or XSRF) and cross-site script inclusion (XSSI). Both of these must be mitigated primarily on the server side, but Angular provides helpers to make integration on the client side easier. ### Cross-site request forgery In a cross-site request forgery (CSRF or XSRF), an attacker tricks the user into visiting a different web page (such as `evil.com`) with malignant code. This web page secretly sends a malicious request to the application's web server (such as `example-bank.com`). Assume the user is logged into the application at `example-bank.com`. The user opens an email and clicks a link to `evil.com`, which opens in a new tab. The `evil.com` page immediately sends a malicious request to `example-bank.com`. Perhaps it's a request to transfer money from the user's account to the attacker's account. The browser automatically sends the `example-bank.com` cookies, including the authentication cookie, with this request. If the `example-bank.com` server lacks XSRF protection, it can't tell the difference between a legitimate request from the application and the forged request from `evil.com`. To prevent this, the application must ensure that a user request originates from the real application, not from a different site. The server and client must cooperate to thwart this attack. In a common anti-XSRF technique, the application server sends a randomly created authentication token in a cookie. The client code reads the cookie and adds a custom request header with the token in all following requests. The server compares the received cookie value to the request header value and rejects the request if the values are missing or don't match. This technique is effective because all browsers implement the *same origin policy*. Only code from the website on which cookies are set can read the cookies from that site and set custom headers on requests to that site. That means only your application can read this cookie token and set the custom header. The malicious code on `evil.com` can't. Angular's `[HttpClient](../api/common/http/httpclient)` has built-in support for the client-side half of this technique. Read about it more in the [HttpClient guide](http#security-xsrf-protection). For information about CSRF at the Open Web Application Security Project (OWASP), see [Cross-Site Request Forgery (CSRF)](https://owasp.org/www-community/attacks/csrf) and [Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html). The Stanford University paper [Robust Defenses for Cross-Site Request Forgery](https://seclab.stanford.edu/websec/csrf/csrf.pdf) is a rich source of detail. 
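By default, `HttpClient` reads the token from a cookie named `XSRF-TOKEN` and sends it back in a header named `X-XSRF-TOKEN` on mutating requests to relative URLs. If your backend uses different names, you can configure them with `HttpClientXsrfModule`. The following is a minimal sketch; the cookie and header names shown are placeholders for whatever names your server actually issues:

```
import { NgModule } from '@angular/core';
import { HttpClientModule, HttpClientXsrfModule } from '@angular/common/http';

@NgModule({
  imports: [
    HttpClientModule,
    // Read the anti-XSRF token from a custom cookie and echo it back
    // in a custom request header. These names are hypothetical; use the
    // names your server expects.
    HttpClientXsrfModule.withOptions({
      cookieName: 'MY-XSRF-COOKIE',
      headerName: 'MY-XSRF-HEADER',
    }),
  ],
})
export class AppModule { }
```

This only covers the client-side half of the technique; the server still has to issue the cookie and verify the header on each state-changing request.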
See also Dave Smith's [talk on XSRF at AngularConnect 2016](https://www.youtube.com/watch?v=9inczw6qtpY "Cross Site Request Funkery Securing Your Angular Apps From Evil Doers"). ### Cross-site script inclusion (XSSI) Cross-site script inclusion, also known as JSON vulnerability, can allow an attacker's website to read data from a JSON API. The attack works on older browsers by overriding built-in JavaScript object constructors, and then including an API URL using a `<script>` tag. This attack is only successful if the returned JSON is executable as JavaScript. Servers can prevent an attack by prefixing all JSON responses to make them non-executable, by convention, using the well-known string `")]}',\n"`. Angular's `[HttpClient](../api/common/http/httpclient)` library recognizes this convention and automatically strips the string `")]}',\n"` from all responses before further parsing. For more information, see the XSSI section of this [Google web security blog post](https://security.googleblog.com/2011/05/website-security-for-webmasters.html). Auditing Angular applications ----------------------------- Angular applications must follow the same security principles as regular web applications, and must be audited as such. Angular-specific APIs that should be audited in a security review, such as the [*bypassSecurityTrust*](security#bypass-security-apis) methods, are marked in the documentation as security sensitive. Last reviewed on Mon Feb 28 2022
angular Providing dependencies in modules Providing dependencies in modules ================================= A provider is an instruction to the [Dependency Injection](dependency-injection) system on how to obtain a value for a dependency. Most of the time, these dependencies are services that you create and provide. For the final sample application using the provider that this page describes, see the live example. Providing a service ------------------- If you already have an application that was created with the [Angular CLI](cli), you can create a service using the [`ng generate`](cli/generate) CLI command in the root project directory. Replace *User* with the name of your service. ``` ng generate service User ``` This command creates the following `UserService` skeleton: ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root', }) export class UserService { } ``` You can now inject `UserService` anywhere in your application. The service itself is a class that the CLI generated and that's decorated with `@[Injectable](../api/core/injectable)()`. By default, this decorator has a `providedIn` property, which creates a provider for the service. In this case, `providedIn: 'root'` specifies that Angular should provide the service in the root injector. Provider scope -------------- When you add a service provider to the root application injector, it's available throughout the application. Additionally, these providers are also available to all the classes in the application as long they have the lookup token. You should always provide your service in the root injector unless there is a case where you want the service to be available only if the consumer imports a particular `@[NgModule](../api/core/ngmodule)`. `providedIn` and NgModules --------------------------- It's also possible to specify that a service should be provided in a particular `@[NgModule](../api/core/ngmodule)`. For example, if you don't want `UserService` to be available to applications unless they import a `UserModule` you've created, you can specify that the service should be provided in the module: ``` import { Injectable } from '@angular/core'; import { UserModule } from './user.module'; @Injectable({ providedIn: UserModule, }) export class UserService { } ``` The example above shows the preferred way to provide a service in a module. This method is preferred because it enables tree-shaking of the service if nothing injects it. If it's not possible to specify in the service which module should provide it, you can also declare a provider for the service within the module: ``` import { NgModule } from '@angular/core'; import { UserService } from './user.service'; @NgModule({ providers: [UserService], }) export class UserModule { } ``` Limiting provider scope by lazy loading modules ----------------------------------------------- In the basic CLI-generated app, modules are eagerly loaded which means that they are all loaded when the application launches. Angular uses an injector system to make things available between modules. In an eagerly loaded app, the root application injector makes all of the providers in all of the modules available throughout the application. This behavior necessarily changes when you use lazy loading. Lazy loading is when you load modules only when you need them; for example, when routing. They aren't loaded right away like with eagerly loaded modules. 
This means that any services listed in their provider arrays aren't available because the root injector doesn't know about these modules. When the Angular router lazy-loads a module, it creates a new injector. This injector is a child of the root application injector. Imagine a tree of injectors; there is a single root injector and then a child injector for each lazy loaded module. This child injector gets populated with all the module-specific providers, if any. Look up resolution for every provider follows the [rules of dependency injection hierarchy](hierarchical-dependency-injection#resolution-rules). Any component created within a lazy loaded module's context, such as by router navigation, gets its own local instance of child provided services, not the instance in the root application injector. Components in external modules continue to receive the instances created for the application root injector. Though you can provide services by lazy loading modules, not all services can be lazy loaded. For instance, some modules only work in the root module, such as the Router. The Router works with the global location object in the browser. As of Angular version 9, you can provide a new instance of a service with each lazy loaded module. The following code adds this functionality to `UserService`. ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'any', }) export class UserService { } ``` With `providedIn: 'any'`, all eagerly loaded modules share a singleton instance; however, lazy loaded modules each get their own unique instance, as shown in the following diagram. Limiting provider scope with components --------------------------------------- Another way to limit provider scope is by adding the service you want to limit to the component's `providers` array. Component providers and NgModule providers are independent of each other. This method is helpful when you want to eagerly load a module that needs a service all to itself. Providing a service in the component limits the service only to that component and its descendants. Other components in the same module can't access it. ``` @Component({ /* . . . */ providers: [UserService] }) ``` Providing services in modules vs. components -------------------------------------------- Generally, provide services the whole application needs in the root module and scope services by providing them in lazy loaded modules. The router works at the root level so if you put providers in a component, even `AppComponent`, lazy loaded modules, which rely on the router, can't see them. Register a provider with a component when you must limit a service instance to a component and its component tree, that is, its child components. For example, a user editing component, `UserEditorComponent`, that needs a private copy of a caching `UserService` should register the `UserService` with the `UserEditorComponent`. Then each new instance of the `UserEditorComponent` gets its own cached service instance. Injector hierarchy and service instances ---------------------------------------- Services are singletons within the scope of an injector, which means there is at most one instance of a service in a given injector. Angular DI has a [hierarchical injection system](hierarchical-dependency-injection), which means that nested injectors can create their own service instances. 
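To make the `UserEditorComponent` scenario described above concrete, here is a minimal sketch; the selector, template, and the exact shape of `UserService` are assumptions for illustration:

```
import { Component } from '@angular/core';
import { UserService } from './user.service';

@Component({
  selector: 'app-user-editor',
  template: `<p>Editing a user</p>`,
  // Registering UserService here gives this component instance (and its
  // child components) a private copy of the service, independent of any
  // instance held by the root injector.
  providers: [UserService],
})
export class UserEditorComponent {
  constructor(private userService: UserService) {}
}
```

Each new `<app-user-editor>` in the application gets its own injector, and therefore its own cached `UserService` instance, which goes away when the component is destroyed.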
Whenever Angular creates a new instance of a component that has `providers` specified in `@[Component](../api/core/component)()`, it also creates a new child injector for that instance. Similarly, when a new NgModule is lazy-loaded at run time, Angular can create an injector for it with its own providers. Child modules and component injectors are independent of each other, and create their own separate instances of the provided services. When Angular destroys an NgModule or component instance, it also destroys that injector and that injector's service instances. For more information, see [Hierarchical injectors](hierarchical-dependency-injection). More on NgModules ----------------- You may also be interested in: * [Singleton Services](singleton-services), which elaborates on the concepts covered on this page * [Lazy Loading Modules](lazy-loading-ngmodules) * [Dependency providers](dependency-injection-providers) * [NgModule FAQ](ngmodule-faq) Last reviewed on Mon Feb 28 2022 angular Start to edit a documentation topic Start to edit a documentation topic =================================== This topic describes the tasks that you perform when you start to work on a documentation issue. The documentation in angular.io is built from [markdown](https://en.wikipedia.org/wiki/Markdown) source code files. The markdown source code files are stored in the `angular` repo that you forked into your GitHub account. To update the Angular documentation, you need: * A clone of `personal/angular` You created this when you [created your workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer). Before you start editing a topic, [update your clone of `personal/angular`](doc-update-start#update-your-fork-with-the-upstream-repo). * A `working` branch that you create from an up-to-date `main` branch. Creating your `working` branch is described [later in this topic](doc-update-start#create-a-working-branch-for-editing). The procedures in this topic assume that the files on your local computer are organized as illustrated in the following diagram. On your local computer, you should have: * Your 'git' workspace directory. In this example, the path to your 'git' workspace directory is `github-projects`. * Your working directory, which is the directory that you created when you cloned your fork into your `git` workspace. In this example, the path to your working directory is `github-projects/personal/angular`, where `personal` is replaced with your GitHub username. > **IMPORTANT**: Remember to replace `personal` with your GitHub username in the commands and examples in this topic. > > The procedures in this topic assume that you are starting from your workspace directory. Update your fork with the upstream repo --------------------------------------- Before you start editing the documentation files, you want to sync the `main` branch of your fork and its clone with the `main` branch of the upstream `angular/angular` repo. This procedure updates the your `personal/angular` repo in the cloud and its clone on your local computer, as illustrated here. The circled numbers correspond to procedure steps. #### To update your fork and its clone with the upstream repo Perform these steps from a command-line tool on your local computer. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). This step is not shown in the image. 
Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 2. Run this command to check out the `main` branch. This step is not shown in the image. ``` git checkout main ``` 3. Run this command to update the `main` branch in the working directory on your local computer from the upstream `angular/angular` repo. ``` git fetch upstream git merge upstream/main ``` 4. Run this command to update your `personal/angular` repo on `github.com` with the latest from the upstream `angular/angular` repo. ``` git push ``` The `main` branch on your local computer is now in sync with your origin repo on `github.com`. They have been updated with any changes that have been made to the upstream `angular/angular` repo since the last time you updated your fork. Create a working branch for editing ----------------------------------- All your edits to the Angular documentation are made in a `working` branch in the clone of `personal/angular` on your local computer. You create the working branch from the up-to-date `main` branch of `personal/angular` on your local computer. A working branch keeps your changes to the Angular documentation separate from the published documentation until it is ready. A working branch also keeps your edits for one issue separate from those of another issue. Finally, a working branch identifies the changes you made in the pull request that you submit when you're finished. > **IMPORTANT**: Before you edit any Angular documentation, make sure that you are using the correct `working` branch. You can confirm your current branch by running `git status` from your `working` directory before you start editing. > > #### To create a `working` branch for editing Perform these steps in a command-line program on your local computer. 1. [Update your fork of `angular/angular`](doc-update-start#update-your-fork-with-the-upstream-repo). 2. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 3. Run this command to check out the `main` branch. ``` git checkout main ``` 4. Run this command to create your working branch. Replace `working-branch` with the name of your working branch. Name your working branch something that relates to your editing task, for example, if you are resolving `issue #12345`, you might name the branch, `issue-12345`. If you are improving error messages, you might name it, `error-message-improvements`. A branch name can have alphanumeric characters, hyphens, underscores, and slashes, but it can't have any spaces or other special characters. ``` git checkout -b working-branch ``` 5. Run this command to make a copy of your working branch in your repo on `github.com` in the cloud. Remember to replace `working-branch` with the name of your working branch. ``` git push --set-upstream origin working-branch ``` Edit the documentation ---------------------- After you create a working branch, you're ready to start editing and creating topics. Last reviewed on Wed Oct 12 2022 angular Documentation contributors guide Documentation contributors guide ================================ The topics in this section describe how you can contribute to this documentation. 
For information about contributing code to the Angular framework, see [Contributing to Angular](https://github.com/angular/angular/blob/main/CONTRIBUTING.md "Contributing to Angular | angular/angular | GitHub"). Angular is an open source project that appreciates its community support, especially when it comes to the documentation. You can update the Angular documentation in these ways: * [Make a minor change](contributors-guide-overview#make-a-minor-change "Make a minor change - Documentation contributors guide | Angular") * [Make a major change](contributors-guide-overview#make-a-major-change "Make a major change - Documentation contributors guide | Angular") > **IMPORTANT**: To submit changes to the Angular documentation, you must have: > > * A [GitHub](https://github.com "GitHub") account > * A signed [Contributor License Agreement](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#-signing-the-cla "Signing the CLA - Contributing to Angular | angular/angular | GitHub") > > Make a minor change ------------------- You can make minor changes to a documentation topic without downloading any software. Many common documentation maintenance tasks require only minor changes to a few words or characters in a topic. Examples of minor changes include: * [Correcting a typo or two](contributors-guide-overview#to-make-a-minor-change-to-a-documentation-topic "To make a minor change to a documentation topic - Documentation contributors guide | Angular") * [Reviewing a topic and updating its review date](reviewing-content#update-the-last-reviewed-date "Update the last reviewed date - Test a documentation update | Angular") * [Adding or updating search keywords](updating-search-keywords "Updating search keywords | Angular") For more about keeping the documentation up to date, see [Common documentation maintenance tasks](doc-tasks "Common documentation maintenance tasks | Angular"). To make larger changes to the documentation, you must install an Angular development environment on your local computer. You need this environment to edit and test your changes before you submit them. For information about configuring your local computer to make larger documentation updates, see [Preparing to edit the documentation](doc-prepare-to-edit "Preparing to edit documentation | Angular"). #### To make a minor change to a documentation topic Perform these steps in a browser. 1. Confirm you have a [signed Contributor License Agreement (CLA)](https://cla.developers.google.com/clas "Contributor License Agreements | Google Open Source") on file. If you don't, [sign a CLA](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#-signing-the-cla "Signing the CLA - Contributing to Angular | angular/angular | GitHub"). 2. Sign into [github.com](https://github.com "GitHub"), or if you don't have a GitHub account, [create a new GitHub account](https://github.com/join "Join GitHub | GitHub"). 3. Navigate to the page in [angular.io](https://angular.io "Angular") that you want to update. 4. On the page that you want to update, locate this pencil icon to the right of the topic's title 5. Click this icon to open the suggestion page. 6. In the suggestion page, in **Edit file**, update the content to fix the problem. If the fix requires more than correcting a few characters, it might be better to treat this as a [major change](contributors-guide-overview#make-a-major-change "Make a major change - Documentation contributors guide | Angular"). 7. 
Click the **Preview** tab to see how your markdown changes look when rendered. This view shows how the markdown renders. It won't look exactly like the documentation page because it doesn't display the text with the styles used in the documentation. 8. After you finish making your changes: 1. In **Propose changes**, enter a brief description of your changes that starts with `docs:` and is 100 characters or less in length. If necessary, you can add more information about the change in the larger edit window below the brief description. 2. Select **Create a new branch for this commit and start a pull request** and accept the default branch name. 3. Click **Propose changes** to open a pull request with your updated text. After you open a pull request, the Angular team reviews your change and merges it into the documentation. You can follow the progress of your pull request in the pull request's page. You might receive a notification from GitHub if the Angular team has any questions about your change. Make a major change ------------------- Making major changes or adding new topics to the documentation follows a different workflow. Major changes to a topic require that you build and test your changes before you send them to the Angular team. These topics provide information about how to set up your local computer to edit, build, and test Angular documentation to make major changes to it. * [Overview of the Angular documentation editorial workflow](doc-update-overview "Overview of Angular documentation editing | Angular") Describes how to configure your local computer to build, edit, and test Angular documentation * [Documentation style guide](docs-style-guide "Angular documentation style guide | Angular") Describes the standards used in the Angular documentation Localize Angular documentation in a new language ------------------------------------------------ Localizing Angular documentation is another way to contribute to Angular documentation. For information about localizing the Angular documentation in a new language, see [Angular localization guidelines](localizing-angular "Angular documentation style guide | Angular"). Last reviewed on Sun Dec 11 2022 angular Generating code using schematics Generating code using schematics ================================ A schematic is a template-based code generator that supports complex logic. It is a set of instructions for transforming a software project by generating or modifying code. Schematics are packaged into [collections](glossary#collection) and installed with npm. The schematic collection can be a powerful tool for creating, modifying, and maintaining any software project, but is particularly useful for customizing Angular projects to suit the particular needs of your own organization. You might use schematics, for example, to generate commonly-used UI patterns or specific components, using predefined templates or layouts. Use schematics to enforce architectural rules and conventions, making your projects consistent and interoperative. Schematics for the Angular CLI ------------------------------ Schematics are part of the Angular ecosystem. The [Angular CLI](glossary#cli) uses schematics to apply transforms to a web-app project. You can modify these schematics, and define new ones to do things like update your code to fix breaking changes in a dependency, for example, or to add a new configuration option or framework to an existing project. 
Schematics that are included in the `@schematics/angular` collection are run by default by the commands `ng generate` and `ng add`. The package contains named schematics that configure the options that are available to the CLI for `ng generate` sub-commands, such as `ng generate component` and `ng generate service`. The sub-commands for `ng generate` are shorthand for the corresponding schematic. To specify and generate a particular schematic, or a collection of schematics, using the long form: ``` ng generate my-schematic-collection:my-schematic-name ``` or ``` ng generate my-schematic-name --collection collection-name ``` ### Configuring CLI schematics A JSON schema associated with a schematic tells the Angular CLI what options are available to commands and sub-commands, and determines the defaults. These defaults can be overridden by providing a different value for an option on the command line. See [Workspace Configuration](workspace-config) for information about how to change the generation option defaults for your workspace. The JSON schemas for the default schematics used by the CLI to generate projects and parts of projects are collected in the package [`@schematics/angular`](https://github.com/angular/angular-cli/tree/main/packages/schematics/angular). The schema describes the options available to the CLI for each of the `ng generate` sub-commands, as shown in the `--help` output. Developing schematics for libraries ----------------------------------- As a library developer, you can create your own collections of custom schematics to integrate your library with the Angular CLI. * An *add schematic* lets developers install your library in an Angular workspace using `ng add` * *Generation schematics* can tell the `ng generate` sub-commands how to modify projects, add configurations and scripts, and scaffold artifacts that are defined in your library * An *update schematic* can tell the `ng update` command how to update your library's dependencies and adjust for breaking changes when you release a new version For more details of what these look like and how to create them, see: * [Authoring Schematics](schematics-authoring) * [Schematics for Libraries](schematics-for-libraries) ### Add schematics An *add schematic* is typically supplied with a library, so that the library can be added to an existing project with `ng add`. The `add` command uses your package manager to download new dependencies, and invokes an installation script that is implemented as a schematic. For example, the [`@angular/material`](https://material.angular.io/guide/schematics) schematic tells the `add` command to install and set up Angular Material and theming, and register new starter components that can be created with `ng generate`. Look at this one as an example and model for your own add schematic. Partner and third party libraries also support the Angular CLI with add schematics. For example, `@ng-bootstrap/schematics` adds [ng-bootstrap](https://ng-bootstrap.github.io) to an app, and `@clr/angular` installs and sets up [Clarity from VMWare](https://clarity.design/documentation/get-started). An *add schematic* can also update a project with configuration changes, add additional dependencies (such as polyfills), or scaffold package-specific initialization code. For example, the `@angular/pwa` schematic turns your application into a PWA by adding an application manifest and service worker. ### Generation schematics Generation schematics are instructions for the `ng generate` command. 
The documented sub-commands use the default Angular generation schematics, but you can specify a different schematic (in place of a sub-command) to generate an artifact defined in your library. Angular Material, for example, supplies generation schematics for the UI components that it defines. The following command uses one of these schematics to render an Angular Material `<mat-table>` that is pre-configured with a datasource for sorting and pagination. ``` ng generate @angular/material:table <component-name> ``` ### Update schematics The `ng update` command can be used to update your workspace's library dependencies. If you supply no options or use the help option, the command examines your workspace and suggests libraries to update. ``` ng update We analyzed your package.json, there are some packages to update: Name Version Command to update ‐------------------------------------------------------------------------------- @angular/cdk 7.2.2 -> 7.3.1 ng update @angular/cdk @angular/cli 7.2.3 -> 7.3.0 ng update @angular/cli @angular/core 7.2.2 -> 7.2.3 ng update @angular/core @angular/material 7.2.2 -> 7.3.1 ng update @angular/material rxjs 6.3.3 -> 6.4.0 ng update rxjs There might be additional packages that are outdated. Run "ng update --all" to try to update all at the same time. ``` If you pass the command a set of libraries to update (or the `--all` flag), it updates those libraries, their peer dependencies, and the peer dependencies that depend on them. > If there are inconsistencies (for example, if peer dependencies cannot be matched by a simple [semver](https://semver.io) range), the command generates an error and does not change anything in the workspace. > > We recommend that you do not force an update of all dependencies by default. Try updating specific dependencies first. > > For more about how the `ng update` command works, see [Update Command](https://github.com/angular/angular-cli/blob/main/docs/specifications/update.md). > > If you create a new version of your library that introduces potential breaking changes, you can provide an *update schematic* to enable the `ng update` command to automatically resolve any such changes in the project being updated. For example, suppose you want to update the Angular Material library. ``` ng update @angular/material ``` This command updates both `@angular/material` and its dependency `@angular/cdk` in your workspace's `package.json`. If either package contains an update schematic that covers migration from the existing version to a new version, the command runs that schematic on your workspace. Last reviewed on Mon Feb 28 2022
angular Overview of documentation maintenance tasks Overview of documentation maintenance tasks =========================================== The Angular documentation needs routine maintenance to keep it up-to-date. The topics in this section describe routine maintenance tasks that you can perform to help keep the Angular documentation in good condition. Documentation maintenance tasks fall into these two categories: * Minor changes * Major changes Minor changes can be made in the GitHub site without the need to load any software or tools on your system. For information about making a minor change to the documentation, see [Make a minor change](contributors-guide-overview#make-a-minor-change). Major changes require that you build and test your changes to the documentation on your local computer before you send them to the Angular documentation. For information about preparing your system to make major changes to the documentation, see [Make a major change](contributors-guide-overview#make-a-major-change). Routine documentation maintenance tasks --------------------------------------- | Task | Scope | | --- | --- | | [Review current documentation](reviewing-content) | Minor (See note) | | [Update search keywords](updating-search-keywords) | Minor | | [Resolve linter errors](docs-lint-errors) | Major | | [Resolve documentation issues](doc-select-issue) | Major | > **NOTE**: Reviewing current documentation requires a minor change if all you need to do is update the `@reviewed` date. If you find a minor problem with a documentation topic, such as a typo, fixing it during your review is also a minor change. > > If you find an issue that you don't feel comfortable fixing, [open a docs issue](https://github.com/angular/angular/issues/new?assignees=&labels=&template=3-docs-bug.yaml) in GitHub so someone else can fix it. > > Last reviewed on Wed Oct 12 2022 angular Introduction to components and templates Introduction to components and templates ======================================== A *component* controls a patch of screen called a [*view*](glossary#view "Definition of view"). It consists of a TypeScript class, an HTML template, and a CSS style sheet. The TypeScript class defines the interaction of the HTML template and the rendered DOM structure, while the style sheet describes its appearance. An Angular application uses individual components to define and control different aspects of the application. For example, an application could include components to describe: * The application root with the navigation links * The list of heroes * The hero editor In the following example, the `HeroListComponent` class includes: * A `heroes` property that holds an array of heroes. * A `selectedHero` property that holds the last hero selected by the user. * A `selectHero()` method sets a `selectedHero` property when the user clicks to choose a hero from that list. The component initializes the `heroes` property by using the `HeroService` service, which is a TypeScript [parameter property](https://www.typescriptlang.org/docs/handbook/2/classes.html#parameter-properties) on the constructor. Angular's dependency injection system provides the `HeroService` service to the component. 
``` export class HeroListComponent implements OnInit { heroes: Hero[] = []; selectedHero: Hero | undefined; constructor(private service: HeroService) { } ngOnInit() { this.heroes = this.service.getHeroes(); } selectHero(hero: Hero) { this.selectedHero = hero; } } ``` Angular creates, updates, and destroys components as the user moves through the application. Your application can take action at each moment in this lifecycle through optional [lifecycle hooks](lifecycle-hooks), like `ngOnInit()`. Component metadata ------------------ The `@[Component](../api/core/component)` decorator identifies the class immediately below it as a component class, and specifies its metadata. In the example code below, you can see that `HeroListComponent` is just a class, with no special Angular notation or syntax at all. It's not a component until you mark it as one with the `@[Component](../api/core/component)` decorator. The metadata for a component tells Angular where to get the major building blocks that it needs to create and present the component and its view. In particular, it associates a *template* with the component, either directly with inline code, or by reference. Together, the component and its template describe a *view*. In addition to containing or pointing to the template, the `@[Component](../api/core/component)` metadata configures, for example, how the component can be referenced in HTML and what services it requires. Here's an example of basic metadata for `HeroListComponent`. ``` @Component({ selector: 'app-hero-list', templateUrl: './hero-list.component.html', providers: [ HeroService ] }) export class HeroListComponent implements OnInit { /* . . . */ } ``` This example shows some of the most useful `@[Component](../api/core/component)` configuration options: | Configuration options | Details | | --- | --- | | `selector` | A CSS selector that tells Angular to create and insert an instance of this component wherever it finds the corresponding tag in template HTML. For example, if an application's HTML contains `<app-hero-list></app-hero-list>`, then Angular inserts an instance of the `HeroListComponent` view between those tags. | | `templateUrl` | The module-relative address of this component's HTML template. Alternatively, you can provide the HTML template inline, as the value of the `template` property. This template defines the component's *host view*. | | `providers` | An array of [providers](glossary#provider) for services that the component requires. In the example, this tells Angular how to provide the `HeroService` instance that the component's constructor uses to get the list of heroes to display. | Templates and views ------------------- You define a component's view with its companion template. A template is a form of HTML that tells Angular how to render the component. Views are typically organized hierarchically, allowing you to modify or show and hide entire UI sections or pages as a unit. The template immediately associated with a component defines that component's *host view*. The component can also define a *view hierarchy*, which contains *embedded views*, hosted by other components. A view hierarchy can include views from components in the same NgModule and from those in different NgModules. Template syntax --------------- A template looks like regular HTML, except that it also contains Angular [template syntax](template-syntax), which alters the HTML based on your application's logic and the state of application and DOM data. 
Your template can use *data binding* to coordinate the application and DOM data, *pipes* to transform data before it is displayed, and *directives* to apply application logic to what gets displayed. For example, here is a template for the Tutorial's `HeroListComponent`. ``` <h2>Hero List</h2> <p><em>Select a hero from the list to see details.</em></p> <ul> <li *ngFor="let hero of heroes"> <button type="button" (click)="selectHero(hero)"> {{hero.name}} </button> </li> </ul> <app-hero-detail *ngIf="selectedHero" [hero]="selectedHero"></app-hero-detail> ``` This template uses typical HTML elements like `<h2>` and `<p>`. It also includes Angular template-syntax elements, `*[ngFor](../api/common/ngfor)`, `{{hero.name}}`, `(click)`, `[hero]`, and `<app-hero-detail>`. The template-syntax elements tell Angular how to render the HTML to the screen, using program logic and data. * The `*[ngFor](../api/common/ngfor)` directive tells Angular to iterate over a list * `{{hero.name}}`, `(click)`, and `[hero]` bind program data to and from the DOM, responding to user input. See more about [data binding](architecture-components#data-binding) below. * The `<app-hero-detail>` element tag in the example represents a new component, `HeroDetailComponent`. The `HeroDetailComponent` defines the `hero-detail` portion of the rendered DOM structure specified by the `HeroListComponent` component. Notice how these custom components mix with native HTML. ### Data binding Without a framework, you would be responsible for pushing data values into the HTML controls and turning user responses into actions and value updates. Writing such push and pull logic by hand is tedious, error-prone, and a nightmare to read, as any experienced front-end JavaScript programmer can attest. Angular supports *two-way data binding*, a mechanism for coordinating the parts of a template with the parts of a component. Add binding markup to the template HTML to tell Angular how to connect both sides. The following diagram shows the four forms of data binding markup. Each form has a direction: to the DOM, from the DOM, or both. This example from the `HeroListComponent` template uses three of these forms. ``` <app-hero-detail [hero]="selectedHero"></app-hero-detail> <button type="button" (click)="selectHero(hero)"> {{hero.name}} </button> ``` | Data bindings | Details | | --- | --- | | `[hero]` [property binding](property-binding) | Passes the value of `selectedHero` from the parent `HeroListComponent` to the `hero` property of the child `HeroDetailComponent`. | | `(click)` [event binding](user-input#binding-to-user-input-events) | Calls the component's `selectHero` method when the user clicks a hero's name. | | `{{hero.name}}` <interpolation> | Displays the component's `hero.name` property value within the `<button>` element. | Two-way data binding (used mainly in [template-driven forms](forms)) combines property and event binding in a single notation. Here's an example from the `HeroDetailComponent` template that uses two-way data binding with the `[ngModel](../api/forms/ngmodel)` directive. ``` <input type="text" id="hero-name" [(ngModel)]="hero.name"> ``` In two-way binding, a data property value flows to the input box from the component as with property binding. The user's changes also flow back to the component, resetting the property to the latest value, as with event binding. Angular processes *all* data bindings once for each JavaScript event cycle, from the root of the application component tree through all child components. 
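The `[( )]` two-way syntax shown above is shorthand for a property binding combined with an event binding on the same element. As a rough illustration of how the pieces fit together (a minimal sketch, not taken from the Tour of Heroes code; the component name and initial value are placeholders, and it assumes `FormsModule` is imported so that `ngModel` is available), the binding can also be written out long-hand:

```
import { Component } from '@angular/core';

// Assumes FormsModule is imported in the application's NgModule so that the
// ngModel directive is available. The selector and hero value are illustrative.
@Component({
  selector: 'app-hero-name-editor',
  template: `
    <!-- Two-way binding shorthand -->
    <input type="text" [(ngModel)]="hero.name">

    <!-- Equivalent long-hand form: a property binding plus an event binding -->
    <input type="text"
           [ngModel]="hero.name"
           (ngModelChange)="hero.name = $event">
  `
})
export class HeroNameEditorComponent {
  hero = { name: 'Windstorm' };
}
```

Writing the two bindings separately can be useful when the value needs to be adjusted on its way back to the model, for example trimming whitespace inside the `(ngModelChange)` handler before assigning it.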
Data binding plays an important role in communication between a template and its component, and is also important for communication between parent and child components. ### Pipes Angular pipes let you declare display-value transformations in your template HTML. A class with the `@[Pipe](../api/core/pipe)` decorator defines a function that transforms input values to output values for display in a view. Angular defines various pipes, such as the [date](../api/common/datepipe) pipe and [currency](../api/common/currencypipe) pipe. For a complete list, see the [Pipes API list](api?type=pipe). You can also define new pipes. To specify a value transformation in an HTML template, use the [pipe operator (`|`)](pipes). ``` {{interpolated_value | pipe_name}} ``` You can chain pipes, sending the output of one pipe function to be transformed by another pipe function. A pipe can also take arguments that control how it performs its transformation. For example, you can pass the desired format to the `[date](../api/common/datepipe)` pipe. ``` <!-- Default format: output 'Jun 15, 2015'--> <p>Today is {{today | date}}</p> <!-- fullDate format: output 'Monday, June 15, 2015'--> <p>The date is {{today | date:'fullDate'}}</p> <!-- shortTime format: output '9:43 AM'--> <p>The time is {{today | date:'shortTime'}}</p> ``` ### Directives Angular templates are *dynamic*. When Angular renders them, it transforms the DOM according to the instructions given by *directives*. A directive is a class with a `@[Directive](../api/core/directive)()` decorator. A component is technically a directive. However, components are so distinctive and central to Angular applications that Angular defines the `@[Component](../api/core/component)()` decorator, which extends the `@[Directive](../api/core/directive)()` decorator with template-oriented features. In addition to components, there are two other kinds of directives: *structural* and *attribute*. Angular defines a number of directives of both kinds, and you can define your own using the `@[Directive](../api/core/directive)()` decorator. Just as for components, the metadata for a directive associates the decorated class with a `selector` element that you use to insert it into HTML. In templates, directives typically appear within an element tag as attributes, either by name or as the target of an assignment or a binding. #### Structural directives *Structural directives* alter layout by adding, removing, and replacing elements in the DOM. The example template uses two built-in structural directives to add application logic to how the view is rendered. ``` <li *ngFor="let hero of heroes"></li> <app-hero-detail *ngIf="selectedHero"></app-hero-detail> ``` | Directives | Details | | --- | --- | | [`*ngFor`](built-in-directives#ngFor) | An *iterative*, which tells Angular to create one `<li>` per hero in the `heroes` list. | | [`*ngIf`](built-in-directives#ngIf) | A *conditional*, which includes the `HeroDetail` component only if a selected hero exists. | #### Attribute directives *Attribute directives* alter the appearance or behavior of an existing element. In templates they look like regular HTML attributes, hence the name. The `[ngModel](../api/forms/ngmodel)` directive, which implements two-way data binding, is an example of an attribute directive. `[ngModel](../api/forms/ngmodel)` modifies the behavior of an existing element (typically `<input>`) by setting its display value property and responding to change events. 
``` <input type="text" id="hero-name" [(ngModel)]="hero.name"> ``` Angular includes pre-defined directives that change: * The layout structure, such as [ngSwitch](built-in-directives#ngSwitch), and * Aspects of DOM elements and components, such as [ngStyle](built-in-directives#ngstyle) and [ngClass](built-in-directives#ngClass). > Learn more in the [Attribute Directives](attribute-directives) and [Structural Directives](structural-directives) guides. > > Last reviewed on Mon Feb 28 2022 angular Building dynamic forms Building dynamic forms ====================== Many forms, such as questionnaires, can be very similar to one another in format and intent. To make it faster and easier to generate different versions of such a form, you can create a *dynamic form template* based on metadata that describes the business object model. Then, use the template to generate new forms automatically, according to changes in the data model. The technique is particularly useful when you have a type of form whose content must change frequently to meet rapidly changing business and regulatory requirements. A typical use-case is a questionnaire. You might need to get input from users in different contexts. The format and style of the forms a user sees should remain constant, while the actual questions you need to ask vary with the context. In this tutorial you will build a dynamic form that presents a basic questionnaire. You build an online application for heroes seeking employment. The agency is constantly tinkering with the application process, but by using the dynamic form you can create the new forms on the fly without changing the application code. The tutorial walks you through the following steps. 1. Enable reactive forms for a project. 2. Establish a data model to represent form controls. 3. Populate the model with sample data. 4. Develop a component to create form controls dynamically. The form you create uses input validation and styling to improve the user experience. It has a Submit button that is only enabled when all user input is valid, and flags invalid input with color coding and error messages. The basic version can evolve to support a richer variety of questions, more graceful rendering, and superior user experience. > See the . > > Prerequisites ------------- Before doing this tutorial, you should have a basic understanding to the following. * [TypeScript](https://www.typescriptlang.org/ "The TypeScript language") and HTML5 programming * Fundamental concepts of [Angular app design](architecture "Introduction to Angular app-design concepts") * Basic knowledge of [reactive forms](reactive-forms "Reactive forms guide") Enable reactive forms for your project -------------------------------------- Dynamic forms are based on reactive forms. To give the application access reactive forms directives, the [root module](bootstrapping "Learn about bootstrapping an app from the root module.") imports `[ReactiveFormsModule](../api/forms/reactiveformsmodule)` from the `@angular/forms` library. The following code from the example shows the setup in the root module. 
``` import { BrowserModule } from '@angular/platform-browser'; import { ReactiveFormsModule } from '@angular/forms'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; import { DynamicFormComponent } from './dynamic-form.component'; import { DynamicFormQuestionComponent } from './dynamic-form-question.component'; @NgModule({ imports: [ BrowserModule, ReactiveFormsModule ], declarations: [ AppComponent, DynamicFormComponent, DynamicFormQuestionComponent ], bootstrap: [ AppComponent ] }) export class AppModule {} ``` ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; platformBrowserDynamic().bootstrapModule(AppModule) .catch(err => console.error(err)); ``` Create a form object model -------------------------- A dynamic form requires an object model that can describe all scenarios needed by the form functionality. The example hero-application form is a set of questions —that is, each control in the form must ask a question and accept an answer. The data model for this type of form must represent a question. The example includes the `DynamicFormQuestionComponent`, which defines a question as the fundamental object in the model. The following `QuestionBase` is a base class for a set of controls that can represent the question and its answer in the form. ``` export class QuestionBase<T> { value: T|undefined; key: string; label: string; required: boolean; order: number; controlType: string; type: string; options: {key: string, value: string}[]; constructor(options: { value?: T; key?: string; label?: string; required?: boolean; order?: number; controlType?: string; type?: string; options?: {key: string, value: string}[]; } = {}) { this.value = options.value; this.key = options.key || ''; this.label = options.label || ''; this.required = !!options.required; this.order = options.order === undefined ? 1 : options.order; this.controlType = options.controlType || ''; this.type = options.type || ''; this.options = options.options || []; } } ``` ### Define control classes From this base, the example derives two new classes, `TextboxQuestion` and `DropdownQuestion`, that represent different control types. When you create the form template in the next step, you instantiate these specific question types in order to render the appropriate controls dynamically. | Control type | Details | | --- | --- | | `TextboxQuestion` control type | Presents a question and lets users enter input. ``` import { QuestionBase } from './question-base'; export class TextboxQuestion extends QuestionBase<string> { override controlType = 'textbox'; } ``` The `TextboxQuestion` control type is represented in a form template using an `<input>` element. The `type` attribute of the element is defined based on the `type` field specified in the `options` argument (for example `text`, `[email](../api/forms/emailvalidator)`, `url`). | | `DropdownQuestion` control type | Presents a list of choices in a select box. ``` import { QuestionBase } from './question-base'; export class DropdownQuestion extends QuestionBase<string> { override controlType = 'dropdown'; } ``` | ### Compose form groups A dynamic form uses a service to create grouped sets of input controls, based on the form model. The following `QuestionControlService` collects a set of `[FormGroup](../api/forms/formgroup)` instances that consume the metadata from the question model. You can specify default values and validation rules. 
``` import { Injectable } from '@angular/core'; import { FormControl, FormGroup, Validators } from '@angular/forms'; import { QuestionBase } from './question-base'; @Injectable() export class QuestionControlService { toFormGroup(questions: QuestionBase<string>[] ) { const group: any = {}; questions.forEach(question => { group[question.key] = question.required ? new FormControl(question.value || '', Validators.required) : new FormControl(question.value || ''); }); return new FormGroup(group); } } ``` Compose dynamic form contents ----------------------------- The dynamic form itself is represented by a container component, which you add in a later step. Each question is represented in the form component's template by an `<app-question>` tag, which matches an instance of `DynamicFormQuestionComponent`. The `DynamicFormQuestionComponent` is responsible for rendering the details of an individual question based on values in the data-bound question object. The form relies on a [`[formGroup]` directive](../api/forms/formgroupdirective "API reference") to connect the template HTML to the underlying control objects. The `DynamicFormQuestionComponent` creates form groups and populates them with controls defined in the question model, specifying display and validation rules. ``` <div [formGroup]="form"> <label [attr.for]="question.key">{{question.label}}</label> <div [ngSwitch]="question.controlType"> <input *ngSwitchCase="'textbox'" [formControlName]="question.key" [id]="question.key" [type]="question.type"> <select [id]="question.key" *ngSwitchCase="'dropdown'" [formControlName]="question.key"> <option *ngFor="let opt of question.options" [value]="opt.key">{{opt.value}}</option> </select> </div> <div class="errorMessage" *ngIf="!isValid">{{question.label}} is required</div> </div> ``` ``` import { Component, Input } from '@angular/core'; import { FormGroup } from '@angular/forms'; import { QuestionBase } from './question-base'; @Component({ selector: 'app-question', templateUrl: './dynamic-form-question.component.html' }) export class DynamicFormQuestionComponent { @Input() question!: QuestionBase<string>; @Input() form!: FormGroup; get isValid() { return this.form.controls[this.question.key].valid; } } ``` The goal of the `DynamicFormQuestionComponent` is to present question types defined in your model. You only have two types of questions at this point but you can imagine many more. The `[ngSwitch](../api/common/ngswitch)` statement in the template determines which type of question to display. The switch uses directives with the [`formControlName`](../api/forms/formcontrolname "FormControlName directive API reference") and [`formGroup`](../api/forms/formgroupdirective "FormGroupDirective API reference") selectors. Both directives are defined in `[ReactiveFormsModule](../api/forms/reactiveformsmodule)`. ### Supply data Another service is needed to supply a specific set of questions from which to build an individual form. For this exercise you create the `QuestionService` to supply this array of questions from the hard-coded sample data. In a real-world app, the service might fetch data from a backend system. The key point, however, is that you control the hero job-application questions entirely through the objects returned from `QuestionService`. To maintain the questionnaire as requirements change, you only need to add, update, and remove objects from the `questions` array. 
The `QuestionService` supplies a set of questions in the form of an array bound to `@[Input](../api/core/input)()` questions. ``` import { Injectable } from '@angular/core'; import { DropdownQuestion } from './question-dropdown'; import { QuestionBase } from './question-base'; import { TextboxQuestion } from './question-textbox'; import { of } from 'rxjs'; @Injectable() export class QuestionService { // TODO: get from a remote source of question metadata getQuestions() { const questions: QuestionBase<string>[] = [ new DropdownQuestion({ key: 'brave', label: 'Bravery Rating', options: [ {key: 'solid', value: 'Solid'}, {key: 'great', value: 'Great'}, {key: 'good', value: 'Good'}, {key: 'unproven', value: 'Unproven'} ], order: 3 }), new TextboxQuestion({ key: 'firstName', label: 'First name', value: 'Bombasto', required: true, order: 1 }), new TextboxQuestion({ key: 'emailAddress', label: 'Email', type: 'email', order: 2 }) ]; return of(questions.sort((a, b) => a.order - b.order)); } } ``` Create a dynamic form template ------------------------------ The `DynamicFormComponent` component is the entry point and the main container for the form, which is represented using the `<app-dynamic-form>` in a template. The `DynamicFormComponent` component presents a list of questions by binding each one to an `<app-question>` element that matches the `DynamicFormQuestionComponent`. ``` <div> <form (ngSubmit)="onSubmit()" [formGroup]="form"> <div *ngFor="let question of questions" class="form-row"> <app-question [question]="question" [form]="form"></app-question> </div> <div class="form-row"> <button type="submit" [disabled]="!form.valid">Save</button> </div> </form> <div *ngIf="payLoad" class="form-row"> <strong>Saved the following values</strong><br>{{payLoad}} </div> </div> ``` ``` import { Component, Input, OnInit } from '@angular/core'; import { FormGroup } from '@angular/forms'; import { QuestionBase } from './question-base'; import { QuestionControlService } from './question-control.service'; @Component({ selector: 'app-dynamic-form', templateUrl: './dynamic-form.component.html', providers: [ QuestionControlService ] }) export class DynamicFormComponent implements OnInit { @Input() questions: QuestionBase<string>[] | null = []; form!: FormGroup; payLoad = ''; constructor(private qcs: QuestionControlService) {} ngOnInit() { this.form = this.qcs.toFormGroup(this.questions as QuestionBase<string>[]); } onSubmit() { this.payLoad = JSON.stringify(this.form.getRawValue()); } } ``` ### Display the form To display an instance of the dynamic form, the `AppComponent` shell template passes the `questions` array returned by the `QuestionService` to the form container component, `<app-dynamic-form>`. ``` import { Component } from '@angular/core'; import { QuestionService } from './question.service'; import { QuestionBase } from './question-base'; import { Observable } from 'rxjs'; @Component({ selector: 'app-root', template: ` <div> <h2>Job Application for Heroes</h2> <app-dynamic-form [questions]="questions$ | async"></app-dynamic-form> </div> `, providers: [QuestionService] }) export class AppComponent { questions$: Observable<QuestionBase<any>[]>; constructor(service: QuestionService) { this.questions$ = service.getQuestions(); } } ``` The example provides a model for a job application for heroes, but there are no references to any specific hero question other than the objects returned by `QuestionService`. 
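Because the form is generated entirely from this metadata, extending the questionnaire normally means nothing more than adding another object to the array that `getQuestions()` returns. The following is a minimal sketch of such an addition; the `teamName` question and its values are hypothetical and are not part of the example code:

```
import { TextboxQuestion } from './question-textbox';
import { QuestionBase } from './question-base';

// Hypothetical extra question: the key, label, and order values below are
// illustrative only and do not appear in the tutorial's sample data.
export const teamNameQuestion: QuestionBase<string> = new TextboxQuestion({
  key: 'teamName',
  label: 'Team name',
  required: false,
  order: 4
});

// Appending this object to the array returned by QuestionService.getQuestions()
// is the only change needed; toFormGroup() and the template render and validate
// the new control without any component changes.
```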
This separation of model and data lets you repurpose the components for any type of survey, as long as it's compatible with the *question* object model. ### Ensuring valid data The form template uses dynamic data binding of metadata to render the form without making any hardcoded assumptions about specific questions. It adds both control metadata and validation criteria dynamically. To ensure valid input, the *Save* button is disabled until the form is in a valid state. When the form is valid, click *Save* and the application renders the current form values as JSON. The following figure shows the final form. Next steps ---------- | Steps | Details | | --- | --- | | Different types of forms and control collection | This tutorial shows how to build a questionnaire, which is just one kind of dynamic form. The example uses `[FormGroup](../api/forms/formgroup)` to collect a set of controls. For an example of a different type of dynamic form, see the section [Creating dynamic forms](reactive-forms#creating-dynamic-forms "Create dynamic forms with arrays") in the Reactive Forms guide. That example also shows how to use `[FormArray](../api/forms/formarray)` instead of `[FormGroup](../api/forms/formgroup)` to collect a set of controls. | | Validating user input | The section [Validating form input](reactive-forms#validating-form-input "Basic input validation") introduces the basics of how input validation works in reactive forms. The [Form validation guide](form-validation "Form validation guide") covers the topic in more depth. | Last reviewed on Mon Feb 28 2022
angular Optional Internationalization practices Optional Internationalization practices ======================================= The following optional topics help you manually configure the internationalization settings of your application. The optional practices are meant for advanced or custom Angular applications. Prerequisites ------------- To prepare your project for translations, you should have a basic understanding of the following subjects. * [Templates](glossary#template "template - Glossary | Angular") * [Components](glossary#component "component - Glossary | Angular") * [Angular CLI](cli "CLI Overview and Command Reference | Angular") [command-line](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") tool for managing the Angular development cycle * [Extensible Markup Language (XML)](https://www.w3.org/XML "Extensible Markup Language (XML) | W3C") used for translation files Learn about optional Angular internationalization practices ----------------------------------------------------------- * [Set the runtime locale manually](i18n-optional-manual-runtime-locale "Set the runtime locale manually"): Learn how to change the runtime locale for your project * [Import global variants of the locale data](i18n-optional-import-global-variants "Import global variants of the locale data"): Learn how to import locale data for language variants * [Manage marked text with custom IDs](i18n-optional-manage-marked-text "Manage marked text with custom IDs"): Learn how to implement custom IDs to help you manage your marked text Last reviewed on Thu Oct 28 2021 angular Usage of Angular libraries published to npm Usage of Angular libraries published to npm =========================================== When you build your Angular application, take advantage of sophisticated first-party libraries, as well as a rich ecosystem of third-party libraries. [Angular Material](https://material.angular.io "Angular Material | Angular") is an example of a sophisticated first-party library. For links to the most popular libraries, see [Angular Resources](resources "Explore Angular Resources | Angular"). Install libraries ----------------- Libraries are published as [npm packages](npm-packages "Workspace npm dependencies | Angular"), usually together with schematics that integrate them with the Angular CLI. To integrate reusable library code into an application, you need to install the package and import the provided functionality in the location you use it. For most published Angular libraries, use the `ng add <lib_name>` Angular CLI command. The `ng add` Angular CLI command uses a package manager to install the library package and invokes schematics that are included in the package to add other scaffolding within the project code. Examples of package managers include [npm](https://www.npmjs.com "npm") or [yarn](https://yarnpkg.com "Yarn"). Additional scaffolding within the project code includes import statements, fonts, and themes. A published library typically provides a `README` file or other documentation on how to add that library to your application. For an example, see the [Angular Material](https://material.angular.io "Angular Material | Angular") documentation. ### Library typings Typically, library packages include typings in `.d.ts` files; see examples in `node_modules/@angular/material`.
If the package of your library does not include typings and your IDE complains, you might need to install the `@types/<lib_name>` package with the library. For example, suppose you have a library named `d3`: ``` npm install d3 --save npm install @types/d3 --save-dev ``` Types defined in a `@types/` package for a library installed into the workspace are automatically added to the TypeScript configuration for the project that uses that library. TypeScript looks for types in the `node_modules/@types` directory by default, so you do not have to add each type package individually. If a library does not have typings available at `@types/`, you may use it by manually adding typings for it. To do this: 1. Create a `typings.d.ts` file in your `src/` directory. This file is automatically included as global type definition. 2. Add the following code in `src/typings.d.ts`: ``` declare module 'host' { export interface Host { protocol?: string; hostname?: string; pathname?: string; } export function parse(url: string, queryString?: string): Host; } ``` 3. In the component or file that uses the library, add the following code: ``` import * as host from 'host'; const parsedUrl = host.parse('https://angular.io'); console.log(parsedUrl.hostname); ``` Define more typings as needed. Updating libraries ------------------ A library is able to be updated by the publisher, and also has individual dependencies which need to be kept current. To check for updates to your installed libraries, use the [`ng update`](cli/update "ng update | CLI |Angular") Angular CLI command. Use `ng update <lib_name>` Angular CLI command to update individual library versions. The Angular CLI checks the latest published release of the library, and if the latest version is newer than your installed version, downloads it and updates your `package.json` to match the latest version. When you update Angular to a new version, you need to make sure that any libraries you are using are current. If libraries have interdependencies, you might have to update them in a particular order. See the [Angular Update Guide](https://update.angular.io "Angular Update Guide | Angular") for help. Adding a library to the runtime global scope -------------------------------------------- If a legacy JavaScript library is not imported into an application, you may add it to the runtime global scope and load it as if it was added in a script tag. Configure the Angular CLI to do this at build time using the `scripts` and `styles` options of the build target in the [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file. For example, to use the [Bootstrap 4](https://getbootstrap.com/docs/4.0/getting-started/introduction "Introduction | Bootstrap") library 1. Install the library and the associated dependencies using the npm package manager: ``` npm install jquery --save npm install popper.js --save npm install bootstrap --save ``` 2. In the `angular.json` configuration file, add the associated script files to the `scripts` array: ``` "scripts": [ "node_modules/jquery/dist/jquery.slim.js", "node_modules/popper.js/dist/umd/popper.js", "node_modules/bootstrap/dist/js/bootstrap.js" ], ``` 3. Add the `bootstrap.css` CSS file to the `styles` array: ``` "styles": [ "node_modules/bootstrap/dist/css/bootstrap.css", "src/styles.css" ], ``` 4. Run or restart the `ng serve` Angular CLI command to see Bootstrap 4 work in your application. 
### Using runtime-global libraries inside your app After you import a library using the "scripts" array, do **not** import it using an import statement in your TypeScript code. The following code snippet is an example import statement. ``` import * as $ from 'jquery'; ``` If you import it using import statements, you have two different copies of the library: one imported as a global library, and one imported as a module. This is especially bad for libraries with plugins, like JQuery, because each copy includes different plugins. Instead, run the `npm install @types/jquery` Angular CLI command to download typings for your library and then follow the library installation steps. This gives you access to the global variables exposed by that library. ### Defining typings for runtime-global libraries If the global library you need to use does not have global typings, you can declare them manually as `any` in `src/typings.d.ts`. For example: ``` declare var libraryName: any; ``` Some scripts extend other libraries; for instance with JQuery plugins: ``` $('.test').myPlugin(); ``` In this case, the installed `@types/jquery` does not include `myPlugin`, so you need to add an interface in `src/typings.d.ts`. For example: ``` interface JQuery { myPlugin(options?: any): any; } ``` If you do not add the interface for the script-defined extension, your IDE shows an error: ``` [TS][Error] Property 'myPlugin' does not exist on type 'JQuery' ``` Last reviewed on Wed Jan 05 2022 angular Browser support Browser support =============== Angular supports most recent browsers. This includes the following specific versions: | Browser | Supported versions | | --- | --- | | Chrome | 2 most recent versions | | Firefox | latest and extended support release (ESR) | | Edge | 2 most recent major versions | | Safari | 2 most recent major versions | | iOS | 2 most recent major versions | | Android | 2 most recent major versions | > Angular's continuous integration process runs unit tests of the framework on all of these browsers for every pull request, using [Sauce Labs](https://saucelabs.com). > > Polyfills --------- Angular is built on the latest standards of the web platform. Targeting such a wide range of browsers is challenging because they do not support all features of modern browsers. You compensate by loading polyfill scripts ("polyfills") for the browsers that you must support. See instructions on how to include polyfills into your project below. > The suggested polyfills are the ones that run full Angular applications. You might need additional polyfills to support features not covered by this list. > > > **NOTE**: Polyfills cannot magically transform an old, slow browser into a modern, fast one. > > Enabling polyfills with CLI projects ------------------------------------ The [Angular CLI](cli) provides support for polyfills. If you are not using the CLI to create your projects, see [Polyfill instructions for non-CLI users](browser-support#non-cli). The `polyfills` options of the [browser](cli/build) and [test](cli/test) builder can be a full path for a file (Example: `src/polyfills.ts`) or, relative to the current workspace or module specifier (Example: `zone.js`). If you create a TypeScript file, make sure to include it in the `files` property of your `tsconfig` file. ``` { "extends": "./tsconfig.json", "compilerOptions": { ... }, "files": [ "src/main.ts", "src/polyfills.ts" ] ... 
} ``` Polyfills for non-CLI users --------------------------- If you are not using the CLI, add your polyfill scripts directly to the host web page (`index.html`). For example: ``` <!-- pre-zone polyfills --> <script src="node_modules/core-js/client/shim.min.js"></script> <script> /** * you can configure some zone flags which can disable zone interception for some * asynchronous activities to improve startup performance - use these options only * if you know what you are doing as it could result in hard to trace down bugs. */ // __Zone_disable_requestAnimationFrame = true; // disable patch requestAnimationFrame // __Zone_disable_on_property = true; // disable patch onProperty such as onclick // __zone_symbol__UNPATCHED_EVENTS = ['scroll', 'mousemove']; // disable patch specified eventNames /* * in Edge developer tools, the addEventListener will also be wrapped by zone.js * with the following flag, it will bypass `zone.js` patch for Edge. */ // __Zone_enable_cross_context_check = true; </script> <!-- zone.js required by Angular --> <script src="node_modules/zone.js/bundles/zone.umd.js"></script> <!-- application polyfills --> ``` Last reviewed on Fri Nov 04 2022 angular Import global variants of the locale data Import global variants of the locale data ========================================= The [Angular CLI](cli "CLI Overview and Command Reference | Angular") automatically includes locale data if you run the [`ng build`](cli/build "ng build | CLI | Angular") command with the `--localize` option. ``` ng build --localize ``` The `@angular/common` package on npm contains the locale data files. Global variants of the locale data are available in [`@angular/common/locales/global`](https://unpkg.com/browse/@angular/common/locales/global "@angular/common/locales/global | Unpkg"). `import` example for French ---------------------------- The following example imports the global variants for French (`fr`). ``` import '@angular/common/locales/global/fr'; ``` Last reviewed on Mon Feb 28 2022 angular Understanding binding Understanding binding ===================== In an Angular template, a binding creates a live connection between a part of the UI created from a template (a DOM element, directive, or component) and the model (the component instance to which the template belongs). This connection can be used to synchronize the view with the model, to notify the model when an event or user action takes place in the view, or both. Angular's [Change Detection](change-detection) algorithm is responsible for keeping the view and the model in sync. Examples of binding include: * text interpolations * property binding * event binding * two-way binding Bindings always have two parts: a *target* which will receive the bound value, and a *template expression* which produces a value from the model. Syntax ------ Template expressions are similar to JavaScript expressions. Many JavaScript expressions are legal template expressions, with the following exceptions. 
You can't use JavaScript expressions that have or promote side effects, including: * Assignments (`=`, `+=`, `-=`, `...`) * Operators such as `new`, `typeof`, or `instanceof` * Chaining expressions with `;` or `,` * The increment and decrement operators `++` and `--` * Some of the ES2015+ operators Other notable differences from JavaScript syntax include: * No support for the bitwise operators such as `|` and `&` * New [template expression operators](template-expression-operators), such as `|` Expression context ------------------ Interpolated expressions have a context—a particular part of the application to which the expression belongs. Typically, this context is the component instance. In the following snippet, the expression `recommended` and the expression `itemImageUrl2` refer to properties of the `AppComponent`. ``` <h4>{{recommended}}</h4> <img alt="item 2" [src]="itemImageUrl2"> ``` An expression can also refer to properties of the *template's* context such as a [template input variable](structural-directives#shorthand) or a [template reference variable](template-reference-variables). The following example uses a template input variable of `customer`. ``` <ul> <li *ngFor="let customer of customers">{{customer.name}}</li> </ul> ``` This next example features a template reference variable, `#customerInput`. ``` <label>Type something: <input #customerInput>{{customerInput.value}} </label> ``` > Template expressions cannot refer to anything in the global namespace, except `undefined`. They can't refer to `window` or `document`. Additionally, they can't call `console.log()` or `Math.max()` and are restricted to referencing members of the expression context. > > ### Preventing name collisions The context against which an expression evaluates is the union of the template variables, the directive's context object—if it has one—and the component's members. If you reference a name that belongs to more than one of these namespaces, Angular applies the following precedence logic to determine the context: 1. The template variable name. 2. A name in the directive's context. 3. The component's member names. To avoid variables shadowing variables in another context, keep variable names unique. In the following example, the `AppComponent` template greets the `customer`, Padma. An `[ngFor](../api/common/ngfor)` then lists each `customer` in the `customers` array. ``` @Component({ template: ` <div> <!-- Hello, Padma --> <h1>Hello, {{customer}}</h1> <ul> <!-- Ebony and Chiho in a list--> <li *ngFor="let customer of customers">{{ customer.value }}</li> </ul> </div> ` }) class AppComponent { customers = [{value: 'Ebony'}, {value: 'Chiho'}]; customer = 'Padma'; } ``` The `customer` within the `[ngFor](../api/common/ngfor)` is in the context of an `[<ng-template>](../api/core/ng-template)` and so refers to the `customer` in the `customers` array, in this case Ebony and Chiho. This list does not feature Padma because `customer` outside of the `[ngFor](../api/common/ngfor)` is in a different context. Conversely, `customer` in the `<h1>` doesn't include Ebony or Chiho because the context for this `customer` is the class and the class value for `customer` is Padma. Expression best practices ------------------------- When using a template expression, follow these best practices: * **Use short expressions** Use property names or method calls whenever possible. Keep application and business logic in the component, where it is accessible to develop and test. 
* **Quick execution** Angular executes a template expression after every [change detection](glossary#change-detection) cycle. Many asynchronous activities trigger change detection cycles, such as promise resolutions, HTTP results, timer events, key presses, and mouse moves. An expression should finish quickly to keep the user experience as efficient as possible, especially on slower devices. Consider caching values when their computation requires greater resources. No visible side effects ----------------------- According to Angular's [unidirectional data flow model](glossary#unidirectional-data-flow), a template expression should not change any application state other than the value of the target property. Reading a component value should not change some other displayed value. The view should be stable throughout a single rendering pass. An [idempotent](https://en.wikipedia.org/wiki/Idempotence) expression is free of side effects and improves Angular's change detection performance. In Angular terms, an idempotent expression always returns *exactly the same thing* until one of its dependent values changes. Dependent values should not change during a single turn of the event loop. If an idempotent expression returns a string or a number, it returns the same string or number if you call it twice consecutively. If the expression returns an object, including an `array`, it returns the same object *reference* if you call it twice consecutively. What's next ----------- * [Property binding](property-binding) * [Event binding](event-binding) Last reviewed on Thu May 12 2022 angular Attribute directives Attribute directives ==================== Change the appearance or behavior of DOM elements and Angular components with attribute directives. > See the live example for a working example containing the code snippets in this guide. > > Building an attribute directive ------------------------------- This section walks you through creating a highlight directive that sets the background color of the host element to yellow. 1. To create a directive, use the CLI command [`ng generate directive`](cli/generate). ``` ng generate directive highlight ``` The CLI creates `src/app/highlight.directive.ts`, a corresponding test file `src/app/highlight.directive.spec.ts`, and declares the directive class in the `AppModule`. The CLI generates the default `src/app/highlight.directive.ts` as follows: ``` import { Directive } from '@angular/core'; @Directive({ selector: '[appHighlight]' }) export class HighlightDirective { } ``` The `@[Directive](../api/core/directive)()` decorator's configuration property specifies the directive's CSS attribute selector, `[appHighlight]`. 2. Import `[ElementRef](../api/core/elementref)` from `@angular/core`. `[ElementRef](../api/core/elementref)` grants direct access to the host DOM element through its `nativeElement` property. 3. Add `[ElementRef](../api/core/elementref)` in the directive's `constructor()` to [inject](dependency-injection) a reference to the host DOM element, the element to which you apply `appHighlight`. 4. Add logic to the `HighlightDirective` class that sets the background to yellow. ``` import { Directive, ElementRef } from '@angular/core'; @Directive({ selector: '[appHighlight]' }) export class HighlightDirective { constructor(private el: ElementRef) { this.el.nativeElement.style.backgroundColor = 'yellow'; } } ``` > Directives *do not* support namespaces. 
> > > ``` > <p app:Highlight>This is invalid</p> > ``` > Applying an attribute directive ------------------------------- 1. To use the `HighlightDirective`, add a `<p>` element to the HTML template with the directive as an attribute. ``` <p appHighlight>Highlight me!</p> ``` Angular creates an instance of the `HighlightDirective` class and injects a reference to the `<p>` element into the directive's constructor, which sets the `<p>` element's background style to yellow. Handling user events -------------------- This section shows you how to detect when a user mouses into or out of the element and to respond by setting or clearing the highlight color. 1. Import `[HostListener](../api/core/hostlistener)` from '@angular/core'. ``` import { Directive, ElementRef, HostListener } from '@angular/core'; ``` 2. Add two event handlers that respond when the mouse enters or leaves, each with the `@[HostListener](../api/core/hostlistener)()` decorator. ``` @HostListener('mouseenter') onMouseEnter() { this.highlight('yellow'); } @HostListener('mouseleave') onMouseLeave() { this.highlight(''); } private highlight(color: string) { this.el.nativeElement.style.backgroundColor = color; } ``` Subscribe to events of the DOM element that hosts an attribute directive, the `<p>` in this case, with the `@[HostListener](../api/core/hostlistener)()` decorator. > The handlers delegate to a helper method, `highlight()`, that sets the color on the host DOM element, `el`. > > The complete directive is as follows: ``` @Directive({ selector: '[appHighlight]' }) export class HighlightDirective { constructor(private el: ElementRef) { } @HostListener('mouseenter') onMouseEnter() { this.highlight('yellow'); } @HostListener('mouseleave') onMouseLeave() { this.highlight(''); } private highlight(color: string) { this.el.nativeElement.style.backgroundColor = color; } } ``` The background color appears when the pointer hovers over the paragraph element and disappears as the pointer moves out. Passing values into an attribute directive ------------------------------------------ This section walks you through setting the highlight color while applying the `HighlightDirective`. 1. In `highlight.directive.ts`, import `[Input](../api/core/input)` from `@angular/core`. ``` import { Directive, ElementRef, HostListener, Input } from '@angular/core'; ``` 2. Add an `appHighlight` `@[Input](../api/core/input)()` property. ``` @Input() appHighlight = ''; ``` The `@[Input](../api/core/input)()` decorator adds metadata to the class that makes the directive's `appHighlight` property available for binding. 3. In `app.component.ts`, add a `color` property to the `AppComponent`. ``` export class AppComponent { color = 'yellow'; } ``` 4. To simultaneously apply the directive and the color, use property binding with the `appHighlight` directive selector, setting it equal to `color`. ``` <p [appHighlight]="color">Highlight me!</p> ``` The `[appHighlight]` attribute binding performs two tasks: * Applies the highlighting directive to the `<p>` element * Sets the directive's highlight color with a property binding ### Setting the value with user input This section guides you through adding radio buttons to bind your color choice to the `appHighlight` directive. 1. 
Add markup to `app.component.html` for choosing a color as follows: ``` <h1>My First Attribute Directive</h1> <h2>Pick a highlight color</h2> <div> <input type="radio" name="colors" (click)="color='lightgreen'">Green <input type="radio" name="colors" (click)="color='yellow'">Yellow <input type="radio" name="colors" (click)="color='cyan'">Cyan </div> <p [appHighlight]="color">Highlight me!</p> ``` 2. Revise the `AppComponent.color` so that it has no initial value. ``` export class AppComponent { color = ''; } ``` 3. In `highlight.directive.ts`, revise `onMouseEnter` method so that it first tries to highlight with `appHighlight` and falls back to `red` if `appHighlight` is `undefined`. ``` @HostListener('mouseenter') onMouseEnter() { this.highlight(this.appHighlight || 'red'); } ``` 4. Serve your application to verify that the user can choose the color with the radio buttons. ![Animated gif of the refactored highlight directive changing color according to the radio button the user selects](https://angular.io/generated/images/guide/attribute-directives/highlight-directive-v2-anim.gif) Binding to a second property ---------------------------- This section guides you through configuring your application so the developer can set the default color. 1. Add a second `[Input](../api/core/input)()` property to `HighlightDirective` called `defaultColor`. ``` @Input() defaultColor = ''; ``` 2. Revise the directive's `onMouseEnter` so that it first tries to highlight with the `appHighlight`, then with the `defaultColor`, and falls back to `red` if both properties are `undefined`. ``` @HostListener('mouseenter') onMouseEnter() { this.highlight(this.appHighlight || this.defaultColor || 'red'); } ``` 3. To bind to the `AppComponent.color` and fall back to "violet" as the default color, add the following HTML. In this case, the `defaultColor` binding doesn't use square brackets, `[]`, because it is static. ``` <p [appHighlight]="color" defaultColor="violet"> Highlight me too! </p> ``` As with components, you can add multiple directive property bindings to a host element. The default color is red if there is no default color binding. When the user chooses a color the selected color becomes the active highlight color. ![Animated gif of final highlight directive that shows red color with no binding and violet with the default color set. When user selects color, the selection takes precedence.](https://angular.io/generated/images/guide/attribute-directives/highlight-directive-final-anim.gif) Deactivating Angular processing with `NgNonBindable` ---------------------------------------------------- To prevent expression evaluation in the browser, add `ngNonBindable` to the host element. `ngNonBindable` deactivates interpolation, directives, and binding in templates. In the following example, the expression `{{ 1 + 1 }}` renders just as it does in your code editor, and does not display `2`. ``` <p>Use ngNonBindable to stop evaluation.</p> <p ngNonBindable>This should not evaluate: {{ 1 + 1 }}</p> ``` Applying `ngNonBindable` to an element stops binding for that element's child elements. However, `ngNonBindable` still lets directives work on the element where you apply `ngNonBindable`. In the following example, the `appHighlight` directive is still active but Angular does not evaluate the expression `{{ 1 + 1 }}`. ``` <h3>ngNonBindable with a directive</h3> <div ngNonBindable [appHighlight]="'yellow'">This should not evaluate: {{ 1 +1 }}, but will highlight yellow. 
</div> ``` If you apply `ngNonBindable` to a parent element, Angular disables interpolation and binding of any sort, such as property binding or event binding, for the element's children. Last reviewed on Mon Feb 28 2022
angular Angular change detection and runtime optimization Angular change detection and runtime optimization ================================================= **Change detection** is the process through which Angular checks to see whether your application state has changed, and if any DOM needs to be updated. At a high level, Angular walks your components from top to bottom, looking for changes. Angular runs its change detection mechanism periodically so that changes to the data model are reflected in an application’s view. Change detection can be triggered either manually or through an asynchronous event (for example, a user interaction or an XMLHttpRequest completion). Change detection is highly optimized and performant, but it can still cause slowdowns if the application runs it too frequently. In this guide, you’ll learn how to control and optimize the change detection mechanism by skipping parts of your application and running change detection only when necessary. Last reviewed on Wed May 04 2022 angular What is Angular? What is Angular? ================ This topic can help you understand Angular: what Angular is, what advantages it provides, and what you might expect as you start to build your applications. Angular is a development platform, built on [TypeScript](https://www.typescriptlang.org). As a platform, Angular includes: * A component-based framework for building scalable web applications * A collection of well-integrated libraries that cover a wide variety of features, including routing, forms management, client-server communication, and more * A suite of developer tools to help you develop, build, test, and update your code With Angular, you're taking advantage of a platform that can scale from single-developer projects to enterprise-level applications. Angular is designed to make updating as straightforward as possible, so take advantage of the latest developments with minimal effort. Best of all, the Angular ecosystem consists of a diverse group of over 1.7 million developers, library authors, and content creators. > See the live example for a working example containing the code snippets in this guide. > > Angular applications: The essentials ------------------------------------ This section explains the core ideas behind Angular. Understanding these ideas can help you design and build your applications more effectively. ### Components Components are the building blocks that compose an application. A component includes a TypeScript class with a `@[Component](../api/core/component)()` decorator, an HTML template, and styles. The `@[Component](../api/core/component)()` decorator specifies the following Angular-specific information: * A CSS selector that defines how the component is used in a template. HTML elements in your template that match this selector become instances of the component. * An HTML template that instructs Angular how to render the component * An optional set of CSS styles that define the appearance of the template's HTML elements The following is a minimal Angular component. ``` import { Component } from '@angular/core'; @Component({ selector: 'hello-world', template: ` <h2>Hello World</h2> <p>This is my first component!</p> ` }) export class HelloWorldComponent { // The code in this class drives the component's behavior.
} ``` To use this component, you write the following in a template: ``` <hello-world></hello-world> ``` When Angular renders this component, the resulting DOM looks like this: ``` <hello-world> <h2>Hello World</h2> <p>This is my first component!</p> </hello-world> ``` Angular's component model offers strong encapsulation and an intuitive application structure. Components also make your application painless to unit test and can improve the general readability of your code. For more information on what to do with components, see the [Components](component-overview) section. ### Templates Every component has an HTML template that declares how that component renders. You define this template either inline or by file path. Angular adds syntax elements that extend HTML so you can insert dynamic values from your component. Angular automatically updates the rendered DOM when your component's state changes. One application of this feature is inserting dynamic text, as shown in the following example. ``` <p>{{ message }}</p> ``` The value for message comes from the component class: ``` import { Component } from '@angular/core'; @Component ({ selector: 'hello-world-interpolation', templateUrl: './hello-world-interpolation.component.html' }) export class HelloWorldInterpolationComponent { message = 'Hello, World!'; } ``` When the application loads the component and its template, the user sees the following: ``` <p>Hello, World!</p> ``` Notice the use of double curly braces—they instruct Angular to interpolate the contents within them. Angular also supports property bindings, to help you set values for properties and attributes of HTML elements and pass values to your application's presentation logic. ``` <p [id]="sayHelloId" [style.color]="fontColor"> You can set my color in the component! </p> ``` Notice the use of the square brackets—that syntax indicates that you're binding the property or attribute to a value in the component class. Declare event listeners to listen for and respond to user actions such as keystrokes, mouse movements, clicks, and touches. You declare an event listener by specifying the event name in parentheses: ``` <button type="button" [disabled]="canClick" (click)="sayMessage()"> Trigger alert message </button> ``` The preceding example calls a method, which is defined in the component class: ``` sayMessage() { alert(this.message); } ``` The following is a combined example of Interpolation, Property Binding, and Event Binding within an Angular template: ``` import { Component } from '@angular/core'; @Component ({ selector: 'hello-world-bindings', templateUrl: './hello-world-bindings.component.html' }) export class HelloWorldBindingsComponent { fontColor = 'blue'; sayHelloId = 1; canClick = false; message = 'Hello, World'; sayMessage() { alert(this.message); } } ``` ``` <button type="button" [disabled]="canClick" (click)="sayMessage()"> Trigger alert message </button> <p [id]="sayHelloId" [style.color]="fontColor"> You can set my color in the component! </p> <p>My color is {{ fontColor }}</p> ``` Add features to your templates by using [directives](built-in-directives). The most popular directives in Angular are `*[ngIf](../api/common/ngif)` and `*[ngFor](../api/common/ngfor)`. Use directives to perform a variety of tasks, such as dynamically modifying the DOM structure. And create your own custom directives to create great user experiences. The following code is an example of the `*[ngIf](../api/common/ngif)` directive. 
``` import { Component } from '@angular/core'; @Component({ selector: 'hello-world-ngif', templateUrl: './hello-world-ngif.component.html' }) export class HelloWorldNgIfComponent { message = "I'm read only!"; canEdit = false; onEditClick() { this.canEdit = !this.canEdit; if (this.canEdit) { this.message = 'You can edit me!'; } else { this.message = "I'm read only!"; } } } ``` ``` <h2>Hello World: ngIf!</h2> <button type="button" (click)="onEditClick()">Make text editable!</button> <div *ngIf="canEdit; else noEdit"> <p>You can edit the following paragraph.</p> </div> <ng-template #noEdit> <p>The following paragraph is read only. Try clicking the button!</p> </ng-template> <p [contentEditable]="canEdit">{{ message }}</p> ``` Angular's declarative templates let you cleanly separate your application's logic from its presentation. Templates are based on standard HTML, for ease in building, maintaining, and updating. For more information on templates, see the [Templates](template-syntax) section. ### Dependency injection Dependency injection lets you declare the dependencies of your TypeScript classes without taking care of their instantiation. Instead, Angular handles the instantiation for you. This design pattern lets you write more testable and flexible code. Understanding dependency injection is not critical to start using Angular, but it is strongly recommended as a best practice. Many aspects of Angular take advantage of it to some degree. To illustrate how dependency injection works, consider the following example. The first file, `logger.service.ts`, defines a `Logger` class. This class contains a `writeCount` function that logs a number to the console. ``` import { Injectable } from '@angular/core'; @Injectable({providedIn: 'root'}) export class Logger { writeCount(count: number) { console.warn(count); } } ``` Next, the `hello-world-di.component.ts` file defines an Angular component. This component contains a button that uses the `writeCount` function of the Logger class. To access that function, the `Logger` service is injected into the `HelloWorldDI` class by adding `private logger: Logger` to the constructor. ``` import { Component } from '@angular/core'; import { Logger } from '../logger.service'; @Component({ selector: 'hello-world-di', templateUrl: './hello-world-di.component.html' }) export class HelloWorldDependencyInjectionComponent { count = 0; constructor(private logger: Logger) { } onLogMe() { this.logger.writeCount(this.count); this.count++; } } ``` For more information about dependency injection and Angular, see the [Dependency injection in Angular](dependency-injection) section. Angular CLI ----------- The Angular CLI is the fastest, straightforward, and recommended way to develop Angular applications. The Angular CLI makes some tasks trouble-free. For example: | Command | Details | | --- | --- | | [ng build](cli/build) | Compiles an Angular application into an output directory. | | [ng serve](cli/serve) | Builds and serves your application, rebuilding on file changes. | | [ng generate](cli/generate) | Generates or modifies files based on a schematic. | | [ng test](cli/test) | Runs unit tests on a given project. | | [ng e2e](cli/e2e) | Builds and serves an Angular application, then runs end-to-end tests. | The Angular CLI is a valuable tool for building out your applications. For more information about the Angular CLI, see the [Angular CLI Reference](cli) section. 
First-party libraries --------------------- The section, [Angular applications: the essentials](what-is-angular#essentials), provides a brief overview of a couple of the key architectural elements that are used when building Angular applications. The many benefits of Angular really become clear when your application grows and you want to add functions such as site navigation or user input. Use the Angular platform to incorporate one of the many first-party libraries that Angular provides. Some of the libraries available to you include: | Library | Details | | --- | --- | | [Angular Router](router) | Advanced client-side navigation and routing based on Angular components. Supports lazy-loading, nested routes, custom path matching, and more. | | [Angular Forms](forms-overview) | Uniform system for form participation and validation. | | [Angular HttpClient](http) | Robust HTTP client that can power more advanced client-server communication. | | [Angular Animations](animations) | Rich system for driving animations based on application state. | | [Angular PWA](service-worker-intro) | Tools for building Progressive Web Applications (PWA) including a service worker and Web application manifest. | | [Angular Schematics](schematics) | Automated scaffolding, refactoring, and update tools that simplify development at large scale. | These libraries expand your application's capabilities while also letting you focus more on the features that make your application unique. Add these libraries knowing that they're designed to integrate flawlessly into and update simultaneously with the Angular framework. These libraries are only required when they can help you add features to your applications or solve a particular problem. Next steps ---------- This topic gives you a brief overview of what Angular is, the advantages it provides, and what to expect as you start to build your applications. To see Angular in action, see the [Getting Started](start) tutorial. This tutorial uses [stackblitz.com](https://stackblitz.com), for you to explore a working example of Angular without any installation requirements. The following sections are recommended to explore Angular's capabilities further: * [Understanding Angular](understanding-angular-overview) * [Angular Developer Guide](developer-guide-overview) Last reviewed on Mon Feb 28 2022 angular Reusable animations Reusable animations =================== This topic provides some examples of how to create reusable animations. Prerequisites ------------- Before continuing with this topic, you should be familiar with the following: * [Introduction to Angular animations](animations) * [Transition and triggers](transition-and-triggers) Create reusable animations -------------------------- To create a reusable animation, use the [`animation()`](../api/animations/animation) function to define an animation in a separate `.ts` file and declare this animation definition as a `const` export variable. You can then import and reuse this animation in any of your application components using the [`useAnimation()`](../api/animations/useanimation) function. ``` import { animation, style, animate, trigger, transition, useAnimation } from '@angular/animations'; export const transitionAnimation = animation([ style({ height: '{{ height }}', opacity: '{{ opacity }}', backgroundColor: '{{ backgroundColor }}' }), animate('{{ time }}') ]); ``` In the preceding code snippet, `transitionAnimation` is made reusable by declaring it as an export variable. 
> **NOTE**: The `height`, `opacity`, `backgroundColor`, and `time` inputs are replaced during runtime. > > You can also export a part of an animation. For example, the following snippet exports the animation `[trigger](../api/animations/trigger)`. ``` import { animation, style, animate, trigger, transition, useAnimation } from '@angular/animations'; export const triggerAnimation = trigger('openClose', [ transition('open => closed', [ useAnimation(transitionAnimation, { params: { height: 0, opacity: 1, backgroundColor: 'red', time: '1s' } }) ]) ]); ``` From this point, you can import reusable animation variables in your component class. For example, the following code snippet imports the `transitionAnimation` variable and uses it via the `[useAnimation](../api/animations/useanimation)()` function. ``` import { Component } from '@angular/core'; import { transition, trigger, useAnimation } from '@angular/animations'; import { transitionAnimation } from './animations'; @Component({ selector: 'app-open-close-reusable', animations: [ trigger('openClose', [ transition('open => closed', [ useAnimation(transitionAnimation, { params: { height: 0, opacity: 1, backgroundColor: 'red', time: '1s' } }) ]) ]) ], templateUrl: 'open-close.component.html', styleUrls: ['open-close.component.css'] }) ``` More on Angular animations -------------------------- You might also be interested in the following: * [Introduction to Angular animations](animations) * [Transition and triggers](transition-and-triggers) * [Complex animation Sequences](complex-animation-sequences) * [Route transition animations](route-animations) Last reviewed on Tue Oct 11 2022 angular Update Angular to v15 Update Angular to v15 ===================== This topic provides information about updating your Angular applications to Angular version 15. For a summary of this information and the step-by-step procedure to update your Angular application to v15, see the [Angular Update Guide](https://update.angular.io). The information in the [Angular Update Guide](https://update.angular.io) and this topic is summarized from these change logs: * [angular/angular changelog](https://github.com/angular/angular/blob/main/CHANGELOG.md) * [angular/angular-cli changelog](https://github.com/angular/angular-cli/blob/main/CHANGELOG.md) * [angular/components changelog](https://github.com/angular/components/blob/main/CHANGELOG.md) Information about updating Angular applications to v14 is archived at [Update to version 14](update-to-version-14). New features in Angular v15 --------------------------- Angular v15 brings many improvements and new features. This section only contains some of the innovations in v15. For a comprehensive list of the new features, see the [Angular blog post on the update to v15](https://blog.angular.io/angular-v15-is-now-available-df7be7f2f4c8). #### Standalone components are stable The standalone components API lets you build Angular applications without the need to use NgModules. For more information about using these APIs in your next Angular application, see [Standalone components](standalone-components). #### The `[NgOptimizedImage](../api/common/ngoptimizedimage)` directive is stable Adding `[NgOptimizedImage](../api/common/ngoptimizedimage)` directive to your component or NgModule can help reduce the download time of images in your Angular application. For more information about using the `[NgOptimizedImage](../api/common/ngoptimizedimage)` directive, see [Getting started with `NgOptimizedImage`](image-directive). 
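As a minimal sketch of what using the directive can look like, the following standalone component renders a single optimized image. The component name, selector, and image path are hypothetical; see the linked guide for the directive's full set of options.

```
import { Component } from '@angular/core';
import { NgOptimizedImage } from '@angular/common';

@Component({
  standalone: true,
  selector: 'app-hero-banner',
  imports: [NgOptimizedImage],
  // ngSrc replaces src; the width and height attributes help avoid layout shift,
  // and priority marks the image as important for the initial page load.
  template: `<img ngSrc="assets/hero.jpg" width="400" height="200" priority />`,
})
export class HeroBannerComponent {}
```

Because the width and height are declared up front, the browser can reserve space for the image before it finishes downloading.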
#### Directives can be added to host elements The directive composition API makes it possible to add directives to host elements, addressing [feature request #8785](https://github.com/angular/angular/issues/8785). Directives let you add behaviors to your components without using inheritance. #### Stack traces are more helpful Angular v15 makes debugging Angular applications easier with cleaner stack traces. Angular worked with Google Chrome developers to present stack traces that show more of your application's code and less from the libraries it calls. For more information about the Chrome DevTools and Angular's support for the cleaner stack traces, see [Modern web debugging in Chrome DevTools](https://developer.chrome.com/blog/devtools-modern-web-debugging/). #### MDC-based components are stable Many of the components in Angular Material v15 have been refactored to be based on Material Design Components (MDC) for the Web. The refactored components offer improved accessibility and adherence to the Material Design spec. For more information about the updated components, see [Migrating to MDC-based Angular Material Components](https://material.angular.io/guide/mdc-migration). Breaking changes in Angular v15 ------------------------------- These are the aspects of Angular that behave differently in v15 and that might require you to review and refactor parts of your Angular application. #### Angular v15 supports node.js versions: 14.20.x, 16.13.x and 18.10.x In v15, Angular no longer supports node.js versions 14.[15-19].x or 16.[10-12].x. [PR #47730](https://github.com/angular/angular/pull/47730) #### Angular v15 supports TypeScript version 4.8 or later In v15, Angular no longer supports TypeScript versions older than 4.8. [PR #47690](https://github.com/angular/angular/pull/47690) #### `@[keyframes](../api/animations/keyframes)` name format changes In v15, `@[keyframes](../api/animations/keyframes)` names are prefixed with the component's *scope name*. [PR #42608](https://github.com/angular/angular/pull/42608) For example, in a component definition whose *scope name* is `host-my-cmp`, a `@[keyframes](../api/animations/keyframes)` rule with a name in v14 of: ``` @keyframes foo { ... } ``` becomes in v15: ``` @keyframes host-my-cmp_foo { ... } ``` This change can break any TypeScript or JavaScript code that uses the names of `@[keyframes](../api/animations/keyframes)` rules. To accommodate this breaking change, you can: * Change the component's view encapsulation to `None` or `ShadowDom`. * Define `@[keyframes](../api/animations/keyframes)` rules in global stylesheets, such as `styles.css`. * Define `@[keyframes](../api/animations/keyframes)` rules in your own code. #### Invalid constructors for dependency injection can report compilation errors When a class inherits its constructor from a base class, the compiler can report an error when that constructor cannot be used for dependency injection purposes. [PR #44615](https://github.com/angular/angular/pull/44615) This can happen: * When the base class is missing an Angular decorator such as `@[Injectable](../api/core/injectable)()` or `@[Directive](../api/core/directive)()` * When the constructor contains parameters that do not have an associated token, such as primitive types like `string`. These situations used to behave unexpectedly at runtime. For example, a class might be constructed without any of its constructor parameters. In v15, this is reported as a compilation error. 
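As an illustration only, the following sketch shows a pattern that the v15 compiler can flag; the class names are hypothetical and the snippet combines both conditions listed above.

```
import { Injectable } from '@angular/core';

// The base class has no Angular decorator, and its constructor parameter is a
// primitive type with no injection token.
class BaseLogger {
  constructor(protected prefix: string) {}
}

// AppLogger inherits that constructor. Because Angular cannot resolve `prefix`,
// the v15 compiler can report this class as having an invalid constructor for DI.
@Injectable({ providedIn: 'root' })
export class AppLogger extends BaseLogger {}
```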
New errors reported because of this change can be resolved by either: * Decorating the base class from which the constructor is inherited. * Adding an explicit constructor to the class for which the error is reported. #### `setDisabledState` is always called when a `[ControlValueAccessor](../api/forms/controlvalueaccessor)` is attached In v15, `setDisabledState` is always called when a `[ControlValueAccessor](../api/forms/controlvalueaccessor)` is attached. [PR #47576](https://github.com/angular/angular/pull/47576) You can opt out of this behavior with `FormsModule.withConfig` or `ReactiveFormsModule.withConfig`. #### The `canParse` method has been removed The `canParse` method has been removed from all translation parsers in `@angular/localize/tools`. [PR #47275](https://github.com/angular/angular/pull/47275) In v15, use `analyze` instead, and the `hint` parameter in the parse methods is mandatory. #### The `title` property is required on `[ActivatedRouteSnapshot](../api/router/activatedroutesnapshot)` In v15, the `title` property is required on [`ActivatedRouteSnapshot`](../api/router/activatedroutesnapshot). [PR #47481](https://github.com/angular/angular/pull/47481) #### `[RouterOutlet](../api/router/routeroutlet)` instantiates the component after change detection Before v15, during navigation, `[RouterOutlet](../api/router/routeroutlet)` instantiated the component being activated immediately. [PR #46554](https://github.com/angular/angular/pull/46554) In v15, the component is not instantiated until after change detection runs. This change could affect tests that do not trigger change detection after a router navigation. This can also affect production code that relies on the exact timing of component availability, for example, if your component's constructor calls `router.getCurrentNavigation()`. #### `relativeLinkResolution` is not configurable in the Router In v15, `relativeLinkResolution` is not configurable in the Router. [PR #47623](https://github.com/angular/angular/pull/47623) In previous versions, this option was used to opt out of a bug fix. #### Angular compiler option `enableIvy` has been removed The Angular compiler option `enableIvy` has been removed because Ivy is Angular's only rendering engine. [PR #47346](https://github.com/angular/angular/pull/47346) #### Angular Material components based on MDC In Angular Material v15, many components have been refactored to be based on the official Material Design Components for Web (MDC). For information about breaking changes in Material components v15, see [Migrating to MDC-based Angular Material Components](https://material.angular.io/guide/mdc-migration). #### Hardening attribute and property binding rules for `<iframe>` elements Existing `<iframe>` instances might have security-sensitive attributes applied to them as an attribute or property binding. These security-sensitive attributes can occur in a template or in a directive's host bindings. Such occurrences require an update to ensure compliance with the new and stricter rules about `<iframe>` bindings. For more information, see [the error page](https://angular.io/errors/NG0910). Deprecations in Angular v15 --------------------------- These are the aspects of Angular that are being phased out. They are still available in v15, but they can be removed in future versions as Angular's [deprecation practices](releases#deprecation-practices) describe. 
To maintain the reliability of your Angular application, review these notes and update your application as soon as practicable. | Removed | Replacement | Details | | --- | --- | --- | | [`DATE_PIPE_DEFAULT_TIMEZONE`](../api/common/date_pipe_default_timezone) | [`DATE_PIPE_DEFAULT_OPTIONS`](../api/common/date_pipe_default_options) | The `timezone` field in `[DATE\_PIPE\_DEFAULT\_OPTIONS](../api/common/date_pipe_default_options)` defines the time zone. [PR #43611](https://github.com/angular/angular/pull/43611) | | [`Injector.get()`](../api/core/injector#get) with the `[InjectFlags](../api/core/injectflags)` parameter | [`Injector.get()`](../api/core/injector#get) with the `[InjectOptions](../api/core/injectoptions)` object | [PR #41592](https://github.com/angular/angular/pull/41592) | | [`TestBed.inject()`](../api/core/testing/testbed#inject) with the `[InjectFlags](../api/core/injectflags)` parameter | [`TestBed.inject()`](../api/core/testing/testbed#inject) with the `[InjectOptions](../api/core/injectoptions)` object. | [PR #46761](https://github.com/angular/angular/pull/46761) | | `providedIn: [NgModule](../api/core/ngmodule)` for [`@Injectable`](../api/core/injectable) and [`InjectionToken`](../api/core/injectiontoken), and `providedIn: 'any'` for an `@[Injectable](../api/core/injectable)` or `[InjectionToken](../api/core/injectiontoken)` | See Details | `providedIn: [NgModule](../api/core/ngmodule)` was intended to be a tree-shakable alternative to `[NgModule](../api/core/ngmodule)` providers. It does not have wide usage and is often used incorrectly in cases where `providedIn: 'root'` would be preferred. If providers must be scoped to a specific [`NgModule`](../api/core/ngmodule), use `[NgModule.providers](../api/core/ngmodule#providers)` instead. [PR #47616](https://github.com/angular/angular/pull/47616) | | [`RouterLinkWithHref`](../api/router/routerlinkwithhref) directive | [`RouterLink`](../api/router/routerlink) directive | The `[RouterLink](../api/router/routerlink)` directive contains the code from the `[RouterLinkWithHref](../api/router/routerlinkwithhref)` directive to handle elements with `href` attributes. [PR #47630](https://github.com/angular/angular/pull/47630), [PR #47599](https://github.com/angular/angular/pull/47599) | For information about deprecations in Material components v15, see [Migrating to MDC-based Angular Material Components](https://material.angular.io/guide/mdc-migration). Last reviewed on Tue Nov 15 2022
angular Make and save changes to a documentation topic Make and save changes to a documentation topic ============================================== This topic describes tasks that you perform while making changes to the documentation. > **IMPORTANT**: Only perform these tasks after you have created a working branch in which to work as described in [Create a working branch for editing](doc-update-start#create-a-working-branch-for-editing). > > Work in the correct working branch ---------------------------------- Before you change any files, make sure that you are working in the correct working branch. #### To set the correct working branch for editing Perform these steps from a command-line tool on your local computer. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 2. Run this command to check out your working branch. Replace `working-branch` with the name of the branch that you created for the documentation issue. ``` git checkout working-branch ``` Edit the documentation ---------------------- Review the [Angular documentation style guide](styleguide) before you start editing to understand how to write and format the text in the documentation. In your working branch, edit the files that need to be changed. Most documentation source files are found in the `aio/content/guide` directory of the `angular` repo. Angular development tools can render the documentation as you make your changes. #### To view the rendered documentation while you are editing Perform these steps from a command-line tool on your local computer or in the **terminal** pane of your IDE. 1. Navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). 2. From your working directory, run this command to navigate to the `aio` directory. The `aio` directory contains Angular's documentation files and tools. ``` cd aio ``` 3. Run this command to build the documentation locally. ``` yarn build ``` This builds the documentation from scratch, but does not serve it. 4. Run this command to serve and sync the documentation. ``` yarn start ``` This serves your draft of the angular.io website locally at `http://localhost:4200` and watches for changes to documentation files. Each time you save an update to a documentation file, the angular.io website at `http://localhost:4200` is updated. You might need to refresh your browser to see the changes after you save them. ### Documentation linting If you installed Vale on your local computer and your IDE, each time you save a markdown file, Vale reviews it for common errors. Vale, the documentation linter, reports the errors it finds in the **Problems** pane of Visual Studio Code. The errors are also reflected in the documentation source code, as close to the problem as possible. For more information about documentation linting and resolving lint problems, see [Resolve documentation linter messages](docs-lint-errors). Save your changes ----------------- As you make changes to the documentation source files on your local computer, your changes can be in one of these states. * **Made, but not saved** This is the state of your changes as you edit a file in your integrated development environment (IDE). * **Saved, but not committed** After you save changes to a file from the IDE, they are saved to your local computer. 
While the changes have been saved, they have not been recorded as a change by `git`, the version control software. Your files are typically in this state as you review your work in progress. * **Committed, but not pushed** After you commit your changes to `git`, your changes are recorded as a *commit* on your local computer, but they are not saved in the cloud. This is the state of your files when you've made some progress and you want to save that progress as a commit. * **Committed and pushed** After you push your commits to your personal repo in `github.com`, your changes are recorded by `git` and saved to the cloud. They are not yet part of the `angular/angular` repo. This is the state your files must be in before you can open a pull request for them to become part of the `angular/angular` repo. * **Merged into Angular** After your pull request is approved and merged, the changes you made are now part of the `angular/angular` repo and appear in the [angular.io](https://angular.io) web site. Your documentation update is complete. This section describes how to save the changes you make to files in your working directory. If you are new to using `git` and GitHub, review this section carefully to understand how to save your changes as you make them. ### Save your changes to your local computer How to save changes that you make to a file on your local computer is determined by your IDE. Refer to your IDE for the specific procedure of saving changes. This process makes your changes *saved, but not committed*. ### Review your rendered topics After you save changes to a documentation topic, and before you commit those changes on your local computer, review the rendered topic in a browser. #### To render your changes in a browser on your local computer Perform these steps from a command-line tool on your local computer or in the **terminal** pane of your IDE. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to the `aio` directory in your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular/aio ``` 2. Run this command to build the documentation using the files on your local computer. ``` yarn build ``` This command builds the documentation from scratch, but does not serve it for viewing. 3. Run this command to serve the documentation locally and rebuild it after it changes. ``` yarn start ``` This command serves the Angular documentation at [`http://localhost:4200`](http://localhost:4200). You might need to refresh the browser after the documentation is updated to see the changes in your browser. After you are satisfied with the changes, commit them on your local computer. ### Commit your changes on your local computer Perform this procedure after you save the changes on your local computer and you are ready to commit changes on your local computer. #### To commit your changes on your local computer Perform these steps from a command-line tool on your local computer or in the **terminal** pane of your IDE. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to the `aio` directory in your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular/aio ``` 2. Run this command to confirm that you are ready to commit your changes. 
``` git status ``` The `git status` command returns output like this. ``` On branch working-branch Your branch is up to date with 'origin/working-branch'. Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git restore <file>..." to discard changes in working directory) modified: content/guide/doc-build-test.md modified: content/guide/doc-edit-finish.md modified: content/guide/doc-editing.md modified: content/guide/doc-pr-prep.md modified: content/guide/doc-pr-update.md modified: content/guide/doc-prepare-to-edit.md modified: content/guide/doc-select-issue.md modified: content/guide/doc-update-start.md no changes added to commit (use "git add" and/or "git commit -a") ``` 3. Confirm that you are in the correct working branch. If you are not in the correct branch, replace `working-branch` with the name of your working branch and then run `git checkout working-branch` to select the correct branch. 4. Review the modified files in the list. Confirm that they are those that you have changed and saved, but not committed. The list of modified files varies, depending on what you have edited. 5. Run this command to add a file that you want to commit. Replace `filename` with a filename from the `git status` output. ``` git add filename ``` You can add multiple files in a single command by using wildcard characters in the filename parameter. You can also run this command to add all changed files that are being tracked by `git` to the commit by using `*` as the filename, as this example shows. ``` git add * ``` > **IMPORTANT**: Files that are not tracked by `git` are not committed or pushed to your repo on `github.com` and they do not appear in your pull request. > > 6. Run `git status` again. ``` git status ``` 7. Review the output and confirm the files that are ready to be committed. ``` On branch working-branch Your branch is up to date with 'origin/working-branch'. Changes to be committed: (use "git restore --staged <file>..." to unstage) modified: content/guide/doc-build-test.md modified: content/guide/doc-edit-finish.md modified: content/guide/doc-editing.md modified: content/guide/doc-pr-prep.md modified: content/guide/doc-pr-update.md modified: content/guide/doc-prepare-to-edit.md modified: content/guide/doc-select-issue.md modified: content/guide/doc-update-start.md ``` 8. Run this command to commit the changed files to your local computer. The commit message that follows the `-m` parameter must start with `docs:` followed by a space, and then your message. Replace `detailed commit message` with a message that describes the changes you made. ``` git commit -m 'docs: detailed commit message' ``` For more information about Angular commit messages, see [Formatting commit messages for a pull request](doc-pr-prep#format-commit-messages-for-a-pull-request). Your changes to the documentation are now *committed, but not pushed*. ### Push your commits to the cloud After you commit the changes to your local computer, this procedure pushes those commits to your `origin` repo in the cloud. #### To push your changes to your origin repo in the cloud Perform these steps from a command-line tool on your local computer or in the **terminal** pane of your IDE. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to the `aio` directory in your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular/aio ``` 2. 
Run this command to confirm that you are using the correct branch. ``` git status ``` If you aren't in the correct branch, replace `working-branch` with the name of your working branch and run `git checkout working-branch` to select the correct branch. The `git status` output also shows whether you have changes on your local computer that have not been pushed to the cloud. ``` On branch working-branch Your branch is ahead of 'origin/working-branch' by 1 commit. (use "git push" to publish your local commits) ``` This example output says that there is one commit on the local computer that's not in the `working-branch` branch on the `origin` repo. The `origin` is the `personal/angular` repo in GitHub. The next command pushes that commit to the `origin` repo. 3. Run this command to push the commits on your local computer to your account on GitHub in the cloud. ``` git push ``` If this is the first time you've pushed commits from the branch, you might see a message such as this. ``` fatal: The current branch working-branch has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin working-branch To have this happen automatically for branches without a tracking upstream, see 'push.autoSetupRemote' in 'git help config'. ``` If you get this message, copy the command that the message provides and run it as shown here: ``` git push --set-upstream origin working-branch ``` The changes that you made in the `working-branch` branch on your local computer have been saved on `github.com`. Your changes to the documentation are now *committed and pushed*. Test your documentation ----------------------- After you update the documentation to fix the issue that you picked, you are ready to test the documentation. Testing documentation consists of: * **Documentation linting** Each time you open and save a documentation topic, the documentation linter checks for common errors. For more information about documentation linting, see [Resolving documentation linter messages](docs-lint-errors). * **Manual review** When your documentation update is complete, have another person review your changes. If you have updated technical content, have a subject matter expert on the topic review your update, as well. * **Automated testing** The Angular documentation is tested automatically after you open a pull request. It must pass this testing before the pull request can be merged. For more information about automated documentation testing, see [Testing a documentation update](doc-build-test). Last reviewed on Wed Oct 12 2022 angular Add the localize package Add the localize package ======================== To take advantage of the localization features of Angular, use the [Angular CLI](cli "CLI Overview and Command Reference | Angular") to add the `@angular/localize` package to your project. To add the `@angular/localize` package, use the following command to update the `package.json` and `polyfills.ts` files in your project. ``` ng add @angular/localize ``` > For more information about `package.json` and `polyfills.ts` files, see [Workspace npm dependencies](npm-packages "Workspace npm dependencies | Angular"). > > If `@angular/localize` is not installed and you try to build a localized version of your project, the [Angular CLI](cli "CLI Overview and Command Reference | Angular") generates an error. 
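As a sketch of one of the changes this command makes, `ng add @angular/localize` typically adds an import like the following to the project's polyfills (the exact file and placement can vary with the Angular version and workspace configuration).

```
// Loads Angular's $localize global function before application code runs.
import '@angular/localize/init';
```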
What's next ----------- * [Refer to locales by ID](i18n-common-locale-id "Refer to locales by ID | Angular") Last reviewed on Thu Oct 07 2021 angular Work with translation files Work with translation files =========================== After you prepare a component for translation, use the [`extract-i18n`](cli/extract-i18n "ng extract-i18n | CLI | Angular") [Angular CLI](cli "CLI Overview and Command Reference | Angular") command to extract the marked text in the component into a *source language* file. The marked text includes text marked with `i18n`, attributes marked with `i18n-`*attribute*, and text tagged with `[$localize](../api/localize/init/%24localize)` as described in [Prepare component for translation](i18n-common-prepare "Prepare component for translation | Angular"). Complete the following steps to create and update translation files for your project. 1. [Extract the source language file](i18n-common-translation-files#extract-the-source-language-file "Extract the source language file - Work with translation files | Angular"). 1. Optionally, change the location, format, and name. 2. Copy the source language file to [create a translation file for each language](i18n-common-translation-files#create-a-translation-file-for-each-language "Create a translation file for each language - Work with translation files | Angular"). 3. [Translate each translation file](i18n-common-translation-files#translate-each-translation-file "Translate each translation file - Work with translation files | Angular"). 4. Translate plurals and alternate expressions separately. 1. [Translate plurals](i18n-common-translation-files#translate-plurals "Translate plurals - Work with translation files | Angular"). 2. [Translate alternate expressions](i18n-common-translation-files#translate-alternate-expressions "Translate alternate expressions - Work with translation files | Angular"). 3. [Translate nested expressions](i18n-common-translation-files#translate-nested-expressions "Translate nested expressions - Work with translation files | Angular"). Extract the source language file -------------------------------- To extract the source language file, complete the following actions. 1. Open a terminal window. 2. Change to the root directory of your project. 3. Run the following CLI command. ``` ng extract-i18n ``` The `extract-i18n` command creates a source language file named `messages.xlf` in the root directory of your project. For more information about the XML Localization Interchange File Format (XLIFF, version 1.2), see [XLIFF](https://en.wikipedia.org/wiki/XLIFF "XLIFF | Wikipedia"). Use the following [`extract-i18n`](cli/extract-i18n "ng extract-i18n | CLI | Angular") command options to change the source language file location, format, and file name. | Command option | Details | | --- | --- | | `--format` | Set the format of the output file | | `--out-file` | Set the name of the output file | | `--output-path` | Set the path of the output directory | ### Change the source language file location To create a file in the `src/locale` directory, specify the output path as an option. #### `extract-i18n --output-path` example The following example specifies the output path as an option. ``` ng extract-i18n --output-path src/locale ``` ### Change the source language file format The `extract-i18n` command creates files in the following translation formats. 
| Translation format | Details | File extension | | --- | --- | --- | | ARB | [Application Resource Bundle](https://github.com/google/app-resource-bundle/wiki/ApplicationResourceBundleSpecification "ApplicationResourceBundleSpecification | google/app-resource-bundle | GitHub") | `.arb` | | JSON | [JavaScript Object Notation](https://www.json.org "Introducing JSON | JSON") | `.json` | | XLIFF 1.2 | [XML Localization Interchange File Format, version 1.2](http://docs.oasis-open.org/xliff/xliff-core/xliff-core.html "XLIFF Version 1.2 Specification | Oasis Open Docs") | `.xlf` | | XLIFF 2 | [XML Localization Interchange File Format, version 2](http://docs.oasis-open.org/xliff/xliff-core/v2.0/cos01/xliff-core-v2.0-cos01.html "XLIFF Version 2.0 | Oasis Open Docs") | `.xlf` | | XMB | [XML Message Bundle](http://cldr.unicode.org/development/development-process/design-proposals/xmb "XMB | CLDR - Unicode Common Locale Data Repository | Unicode") | `.xmb` (`.xtb`) | Specify the translation format explicitly with the `--format` command option. > The XMB format generates `.xmb` source language files, but uses `.xtb` translation files. > > #### `extract-i18n --format` example The following example demonstrates several translation formats. ``` ng extract-i18n --format=xlf ng extract-i18n --format=xlf2 ng extract-i18n --format=xmb ng extract-i18n --format=json ng extract-i18n --format=arb ``` ### Change the source language file name To change the name of the source language file generated by the extraction tool, use the `--out-file` command option. #### `extract-i18n --out-file` example The following example demonstrates naming the output file. ``` ng extract-i18n --out-file source.xlf ``` Create a translation file for each language ------------------------------------------- To create a translation file for a locale or language, complete the following actions. 1. [Extract the source language file](i18n-common-translation-files#extract-the-source-language-file "Extract the source language file - Work with translation files | Angular"). 2. Make a copy of the source language file to create a *translation* file for each language. 3. Rename the *translation* file to add the locale. ``` messages.xlf --> messages.{locale}.xlf ``` 4. Create a new directory named `locale` in your project's `src` directory. ``` src/locale ``` 5. Move the *translation* file to the new directory. 6. Send the *translation* file to your translator. 7. Repeat the above steps for each language you want to add to your application. ### `extract-i18n` example for French For example, to create a French translation file, complete the following actions. 1. Run the `extract-i18n` command. 2. Make a copy of the `messages.xlf` source language file. 3. Rename the copy to `messages.fr.xlf` for the French language (`fr`) translation. 4. Move the `fr` translation file to the `src/locale` directory. 5. Send the `fr` translation file to the translator. Translate each translation file ------------------------------- Unless you are fluent in the language and have the time to edit translations, you will likely complete the following steps. 1. Send each translation file to a translator. 2. The translator uses an XLIFF file editor to complete the following actions. 1. Create the translation. 2. Edit the translation. ### Translation process example for French To demonstrate the process, review the `messages.fr.xlf` file in the [Example Angular Internationalization application](i18n-example "Example Angular Internationalization application | Angular"). 
The [Example Angular Internationalization application](i18n-example "Example Angular Internationalization application | Angular") includes a French translation for you to edit without a special XLIFF editor or knowledge of French. The following actions describe the translation process for French. 1. Open `messages.fr.xlf` and find the first `<trans-unit>` element. This is a *translation unit*, also known as a *text node*, that represents the translation of the `<h1>` greeting tag that was previously marked with the `i18n` attribute. ``` <trans-unit id="introductionHeader" datatype="html"> <source>Hello i18n!</source> <note priority="1" from="description">An introduction header for this sample</note> <note priority="1" from="meaning">User welcome</note> </trans-unit> ``` The `id="introductionHeader"` is a [custom ID](i18n-optional-manage-marked-text "Manage marked text with custom IDs | Angular"), but without the `@@` prefix required in the source HTML. 2. Duplicate the `<source>... </source>` element in the text node, rename it to `target`, and then replace the content with the French text. ``` <trans-unit id="introductionHeader" datatype="html"> <source>Hello i18n!</source> <target>Bonjour i18n !</target> <note priority="1" from="description">An introduction header for this sample</note> <note priority="1" from="meaning">User welcome</note> </trans-unit> ``` In a more complex translation, the information and context in the [description and meaning elements](i18n-common-prepare#add-helpful-descriptions-and-meanings "Add helpful descriptions and meanings - Prepare component for translation | Angular") help you choose the right words for translation. 3. Translate the other text nodes. The following example displays the way to translate. ``` <trans-unit id="ba0cc104d3d69bf669f97b8d96a4c5d8d9559aa3" datatype="html"> <source>I don&apos;t output any element</source> <target>Je n'affiche aucun élément</target> </trans-unit> <trans-unit id="701174153757adf13e7c24a248c8a873ac9f5193" datatype="html"> <source>Angular logo</source> <target>Logo d'Angular</target> </trans-unit> ``` > Don't change the IDs for translation units. Each `id` attribute is generated by Angular and depends on the content of the component text and the assigned meaning. If you change either the text or the meaning, then the `id` attribute changes. For more about managing text updates and IDs, see [custom IDs](i18n-optional-manage-marked-text "Manage marked text with custom IDs | Angular"). > > Translate plurals ----------------- Add or remove plural cases as needed for each language. > For language plural rules, see [CLDR plural rules](https://unicode-org.github.io/cldr-staging/charts/latest/supplemental/language_plural_rules.html "Language Plural Rules - CLDR Charts | Unicode | GitHub"). > > ### `minute` `plural` example To translate a `plural`, translate the ICU format match values. * `just now` * `one minute ago` * `<x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes ago` The following example displays the way to translate. 
``` <trans-unit id="5a134dee893586d02bffc9611056b9cadf9abfad" datatype="html"> <source>{VAR_PLURAL, plural, =0 {just now} =1 {one minute ago} other {<x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes ago} }</source> <target>{VAR_PLURAL, plural, =0 {à l'instant} =1 {il y a une minute} other {il y a <x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes} }</target> </trans-unit> ``` Translate alternate expressions ------------------------------- Angular also extracts alternate `select` ICU expressions as separate translation units. ### `gender` `select` example The following example displays a `select` ICU expression in the component template. ``` <span i18n>The author is {gender, select, male {male} female {female} other {other}}</span> ``` In this example, Angular extracts the expression into two translation units. The first contains the text outside of the `select` clause, and uses a placeholder for `select` (`<x id="ICU">`): ``` <trans-unit id="f99f34ac9bd4606345071bd813858dec29f3b7d1" datatype="html"> <source>The author is <x id="ICU" equiv-text="{gender, select, male {...} female {...} other {...}}"/></source> <target>L'auteur est <x id="ICU" equiv-text="{gender, select, male {...} female {...} other {...}}"/></target> </trans-unit> ``` > When you translate the text, move the placeholder if necessary, but don't remove it. If you remove the placeholder, the ICU expression is removed from your translated application. > > The following example displays the second translation unit that contains the `select` clause. ``` <trans-unit id="eff74b75ab7364b6fa888f1cbfae901aaaf02295" datatype="html"> <source>{VAR_SELECT, select, male {male} female {female} other {other} }</source> <target>{VAR_SELECT, select, male {un homme} female {une femme} other {autre} }</target> </trans-unit> ``` The following example displays both translation units after translation is complete. ``` <trans-unit id="f99f34ac9bd4606345071bd813858dec29f3b7d1" datatype="html"> <source>The author is <x id="ICU" equiv-text="{gender, select, male {...} female {...} other {...}}"/></source> <target>L'auteur est <x id="ICU" equiv-text="{gender, select, male {...} female {...} other {...}}"/></target> </trans-unit> <trans-unit id="eff74b75ab7364b6fa888f1cbfae901aaaf02295" datatype="html"> <source>{VAR_SELECT, select, male {male} female {female} other {other} }</source> <target>{VAR_SELECT, select, male {un homme} female {une femme} other {autre} }</target> </trans-unit> ``` Translate nested expressions ---------------------------- Angular treats a nested expression in the same manner as an alternate expression. Angular extracts the expression into two translation units. ### Nested `plural` example The following example displays the first translation unit that contains the text outside of the nested expression. ``` <trans-unit id="972cb0cf3e442f7b1c00d7dab168ac08d6bdf20c" datatype="html"> <source>Updated: <x id="ICU" equiv-text="{minutes, plural, =0 {...} =1 {...} other {...}}"/></source> <target>Mis à jour: <x id="ICU" equiv-text="{minutes, plural, =0 {...} =1 {...} other {...}}"/></target> </trans-unit> ``` The following example displays the second translation unit that contains the complete nested expression. 
``` <trans-unit id="7151c2e67748b726f0864fc443861d45df21d706" datatype="html"> <source>{VAR_PLURAL, plural, =0 {just now} =1 {one minute ago} other {<x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes ago by {VAR_SELECT, select, male {male} female {female} other {other} }} }</source> <target>{VAR_PLURAL, plural, =0 {à l'instant} =1 {il y a une minute} other {il y a <x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes par {VAR_SELECT, select, male {un homme} female {une femme} other {autre} }} }</target> </trans-unit> ``` The following example displays both translation units after translating. ``` <trans-unit id="972cb0cf3e442f7b1c00d7dab168ac08d6bdf20c" datatype="html"> <source>Updated: <x id="ICU" equiv-text="{minutes, plural, =0 {...} =1 {...} other {...}}"/></source> <target>Mis à jour: <x id="ICU" equiv-text="{minutes, plural, =0 {...} =1 {...} other {...}}"/></target> </trans-unit> <trans-unit id="7151c2e67748b726f0864fc443861d45df21d706" datatype="html"> <source>{VAR_PLURAL, plural, =0 {just now} =1 {one minute ago} other {<x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes ago by {VAR_SELECT, select, male {male} female {female} other {other} }} }</source> <target>{VAR_PLURAL, plural, =0 {à l'instant} =1 {il y a une minute} other {il y a <x id="INTERPOLATION" equiv-text="{{minutes}}"/> minutes par {VAR_SELECT, select, male {un homme} female {une femme} other {autre} }} }</target> </trans-unit> ``` What's next ----------- * [Merge translations into the app](i18n-common-merge "Merge translations into the application | Angular") Last reviewed on Mon Feb 28 2022
angular Overview of Angular documentation editorial workflow Overview of Angular documentation editorial workflow ==================================================== This section describes the process of making major changes to the Angular documentation. It also describes how Angular documentation is stored, built, revised, and tested. The following diagram illustrates the workflow for revising Angular documentation. The steps are summarized below and described in the topics of this section. Prepare to edit the docs ------------------------ You perform this step one time to prepare your local computer to update the Angular documentation. For more information about how to prepare to edit the docs, see [Preparing to edit documentation](doc-prepare-to-edit). Select a documentation issue ---------------------------- The first step in resolving a documentation issue is to select one to fix. The issue that you fix can be one from the [list of documentation issues](https://github.com/angular/angular/issues?q=is%3Aissue+is%3Aopen+label%3A%22comp%3A+docs%22) in the `angular/angular` repo or one you create. For more information about how to select an issue to fix, see [Selecting a documentation issue](doc-select-issue). ### Create a documentation issue If you want to fix a problem that has not already been described in an issue, [open a documentation issue](https://github.com/angular/angular/issues/new?assignees=&labels=&template=3-docs-bug.yaml) before you start. When you can relate an issue to your pull request, reviewers can understand the problem better when they review your pull request. ### Create a working branch After you select an issue to resolve, create a `working` branch in the `working` directory on your local computer. You need to make your changes in this branch to save and test them while you edit. After you fix the issue, you use this branch when you open the pull request for your solution to be merged into `angular/angular`. For more information about how to create a `working` branch, see [Starting to edit a documentation topic](doc-update-start). Revise topics ------------- In your `working` branch, you edit and create the documentation topics necessary to resolve the issue. You perform most of this work in your integrated development environment (IDE). For more information about how to revise a documentation topic, see [Revising a documentation topic](doc-editing). ### Resolve lint errors Each time you save your edits to a documentation topic, the documentation linter reviews your topic. It reports the problems it finds in your topic to your IDE. To prevent delays later in the pull request process, you should correct these problems as they are reported. The documentation linter errors must be corrected before you open the pull request to pass the pull request review. Having lint errors in a topic can prevent the pull request from being approved for merging. For more information about how to resolve lint problems in a documentation topic, see [Resolving documentation linter messages](docs-lint-errors). ### Test your changes As you edit documentation topics to resolve the issue you selected, you want to build a local version of the updated documentation. This is the easiest way to review your changes in the same context as the documentation's users. You can also run some of the automated tests on your local computer to catch other errors. Running these tests on your local computer before you open a pull request speeds up the pull-request approval process. 
For more information about how to build and test your changes before you open a pull request, see [Building and testing documentation](doc-build-test). Prepare your pull request ------------------------- To make your documentation changes ready to be added to the `angular/angular` repo, there are a few things to do before you open a pull request. For example, to make your pull request easy to review and approve, the commits and commit messages in your `working` branch must be formatted correctly. For information about how to prepare your branch for a pull request, see [Preparing documentation for a pull request](doc-pr-prep). ### Open your pull request Opening a documentation pull request sends your changes to the Angular reviewers who are familiar with the topic. To be processed correctly, pull requests for `angular/angular` must be formatted correctly and contain specific information. For information about how to format a pull request for your documentation update, see [Opening a documentation pull request](doc-pr-open). ### Update your pull request You might get feedback about your pull request that requires you to revise the topic. Because the pull-request process is designed for all Angular code, as well as the documentation, this process might seem intimidating the first time. For information about how to update your topics and respond to feedback on your changes, see [Updating a documentation pull request in progress](doc-pr-update). Clean up after merge -------------------- After your pull request is approved and merged into `angular/angular`, it becomes part of the official Angular documentation. At that point, your changes are now in the `main` branch of `angular/angular`. This means that you can safely delete your `working` branch. It is generally a good practice to delete `working` branches after their changes are merged into the `main` branch of `angular/angular`. This prevents your personal fork from collecting lots of branches that could be confusing in the future. For information about how to clean up safely after your pull request is merged, see [Finishing up a documentation pull request](doc-edit-finish). Last reviewed on Wed Oct 12 2022 angular Router reference Router reference ================ The following sections highlight some core router concepts. Router imports -------------- The Angular Router is an optional service that presents a particular component view for a given URL. It isn't part of the Angular core and thus is in its own library package, `@angular/router`. Import what you need from it as you would from any other Angular package. ``` import { RouterModule, Routes } from '@angular/router'; ``` > For more on browser URL styles, see [`LocationStrategy` and browser URL styles](router#browser-url-styles). > > Configuration ------------- A routed Angular application has one singleton instance of the `[Router](../api/router/router)` service. When the browser's URL changes, that router looks for a corresponding `[Route](../api/router/route)` from which it can determine the component to display. A router has no routes until you configure it. The following example creates five route definitions, configures the router via the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method, and adds the result to the `imports` array of the `AppModule`. 
``` const appRoutes: Routes = [ { path: 'crisis-center', component: CrisisListComponent }, { path: 'hero/:id', component: HeroDetailComponent }, { path: 'heroes', component: HeroListComponent, data: { title: 'Heroes List' } }, { path: '', redirectTo: '/heroes', pathMatch: 'full' }, { path: '**', component: PageNotFoundComponent } ]; @NgModule({ imports: [ RouterModule.forRoot( appRoutes, { enableTracing: true } // <-- debugging purposes only ) // other imports here ], ... }) export class AppModule { } ``` The `appRoutes` array of routes describes how to navigate. Pass it to the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method in the module `imports` to configure the router. Each `[Route](../api/router/route)` maps a URL `path` to a component. There are no leading slashes in the path. The router parses and builds the final URL for you, which lets you use both relative and absolute paths when navigating between application views. The `:id` in the second route is a token for a route parameter. In a URL such as `/hero/42`, "42" is the value of the `id` parameter. The corresponding `HeroDetailComponent` uses that value to find and present the hero whose `id` is 42. The `data` property in the third route is a place to store arbitrary data associated with this specific route. The data property is accessible within each activated route. Use it to store items such as page titles, breadcrumb text, and other read-only, static data. Use the [resolve guard](router-tutorial-toh#resolve-guard) to retrieve dynamic data. The empty path in the fourth route represents the default path for the application, the place to go when the path in the URL is empty, as it typically is at the start. This default route redirects to the route for the `/heroes` URL and, therefore, displays the `HeroListComponent`. If you need to see what events are happening during the navigation lifecycle, there is the `enableTracing` option as part of the router's default configuration. This outputs each router event that took place during each navigation lifecycle to the browser console. Use `enableTracing` only for debugging purposes. You set the `enableTracing: true` option in the object passed as the second argument to the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` method. Router outlet ------------- The `[RouterOutlet](../api/router/routeroutlet)` is a directive from the router library that is used like a component. It acts as a placeholder that marks the spot in the template where the router should display the components for that outlet. ``` <router-outlet></router-outlet> <!-- Routed components go here --> ``` Given the preceding configuration, when the browser URL for this application becomes `/heroes`, the router matches that URL to the route path `/heroes` and displays the `HeroListComponent` as a sibling element to the `[RouterOutlet](../api/router/routeroutlet)` that you've placed in the host component's template. Router links ------------ To navigate as a result of some user action such as the click of an anchor tag, use `[RouterLink](../api/router/routerlink)`. Consider the following template: ``` <h1>Angular Router</h1> <nav> <a routerLink="/crisis-center" routerLinkActive="active" ariaCurrentWhenActive="page">Crisis Center</a> <a routerLink="/heroes" routerLinkActive="active" ariaCurrentWhenActive="page">Heroes</a> </nav> <router-outlet></router-outlet> ``` The `[RouterLink](../api/router/routerlink)` directives on the anchor tags give the router control over those elements. 
The navigation paths are fixed, so you can assign a string as a one-time binding to the `[routerLink](../api/router/routerlink)`. Had the navigation path been more dynamic, you could have bound to a template expression that returned an array of route link parameters; that is, the [link parameters array](router#link-parameters-array). The router resolves that array into a complete URL. Active router links ------------------- The `[RouterLinkActive](../api/router/routerlinkactive)` directive toggles CSS classes for active `[RouterLink](../api/router/routerlink)` bindings based on the current `[RouterState](../api/router/routerstate)`. On each anchor tag, you see a [property binding](property-binding) to the `[RouterLinkActive](../api/router/routerlinkactive)` directive that looks like ``` routerLinkActive="..." ``` The template expression to the right of the equal sign, `=`, contains a space-delimited string of CSS classes that the Router adds when this link is active and removes when the link is inactive. You set the `[RouterLinkActive](../api/router/routerlinkactive)` directive to a string of classes such as `[routerLinkActive](../api/router/routerlinkactive)="active fluffy"` or bind it to a component property that returns such a string. For example, ``` [routerLinkActive]="someStringProperty" ``` Active route links cascade down through each level of the route tree, so parent and child router links can be active at the same time. To override this behavior, bind to the `[routerLinkActiveOptions]` input binding with the `{ exact: true }` expression. By using `{ exact: true }`, a given `[RouterLink](../api/router/routerlink)` is only active if its URL is an exact match to the current URL. `[RouterLinkActive](../api/router/routerlinkactive)` also allows you to easily apply the `aria-current` attribute to the active element, thus providing a more accessible experience for all users. For more information see the Accessibility Best Practices [Active links identification section](accessibility#active-links-identification). Router state ------------ After the end of each successful navigation lifecycle, the router builds a tree of `[ActivatedRoute](../api/router/activatedroute)` objects that make up the current state of the router. You can access the current `[RouterState](../api/router/routerstate)` from anywhere in the application using the `[Router](../api/router/router)` service and the `routerState` property. Each `[ActivatedRoute](../api/router/activatedroute)` in the `[RouterState](../api/router/routerstate)` provides methods to traverse up and down the route tree to get information from parent, child, and sibling routes. Activated route --------------- The route path and parameters are available through an injected router service called the [ActivatedRoute](../api/router/activatedroute). It has a great deal of useful information including: | Property | Details | | --- | --- | | `url` | An `Observable` of the route paths, represented as an array of strings for each part of the route path. | | `data` | An `Observable` that contains the `data` object provided for the route. Also contains any resolved values from the [resolve guard](router-tutorial-toh#resolve-guard). | | `params` | An `Observable` that contains the required and [optional parameters](router-tutorial-toh#optional-route-parameters) specific to the route. 
| | `paramMap` | An `Observable` that contains a [map](../api/router/parammap) of the required and [optional parameters](router-tutorial-toh#optional-route-parameters) specific to the route. The map supports retrieving single and multiple values from the same parameter. | | `queryParamMap` | An `Observable` that contains a [map](../api/router/parammap) of the [query parameters](router-tutorial-toh#query-parameters) available to all routes. The map supports retrieving single and multiple values from the query parameter. | | `queryParams` | An `Observable` that contains the [query parameters](router-tutorial-toh#query-parameters) available to all routes. | | `fragment` | An `Observable` of the URL [fragment](router-tutorial-toh#fragment) available to all routes. | | `outlet` | The name of the `[RouterOutlet](../api/router/routeroutlet)` used to render the route. For an unnamed outlet, the outlet name is primary. | | `routeConfig` | The route configuration used for the route that contains the origin path. | | `parent` | The route's parent `[ActivatedRoute](../api/router/activatedroute)` when this route is a [child route](router-tutorial-toh#child-routing-component). | | `firstChild` | Contains the first `[ActivatedRoute](../api/router/activatedroute)` in the list of this route's child routes. | | `children` | Contains all the [child routes](router-tutorial-toh#child-routing-component) activated under the current route. | Router events ------------- During each navigation, the `[Router](../api/router/router)` emits navigation events through the `[Router.events](../api/router/router#events)` property. These events are shown in the following table. | Router event | Details | | --- | --- | | [`NavigationStart`](../api/router/navigationstart) | Triggered when navigation starts. | | [`RouteConfigLoadStart`](../api/router/routeconfigloadstart) | Triggered before the `[Router](../api/router/router)` [lazy loads](router-tutorial-toh#asynchronous-routing) a route configuration. | | [`RouteConfigLoadEnd`](../api/router/routeconfigloadend) | Triggered after a route has been lazy loaded. | | [`RoutesRecognized`](../api/router/routesrecognized) | Triggered when the Router parses the URL and the routes are recognized. | | [`GuardsCheckStart`](../api/router/guardscheckstart) | Triggered when the Router begins the Guards phase of routing. | | [`ChildActivationStart`](../api/router/childactivationstart) | Triggered when the Router begins activating a route's children. | | [`ActivationStart`](../api/router/activationstart) | Triggered when the Router begins activating a route. | | [`GuardsCheckEnd`](../api/router/guardscheckend) | Triggered when the Router finishes the Guards phase of routing successfully. | | [`ResolveStart`](../api/router/resolvestart) | Triggered when the Router begins the Resolve phase of routing. | | [`ResolveEnd`](../api/router/resolveend) | Triggered when the Router finishes the Resolve phase of routing successfully. | | [`ChildActivationEnd`](../api/router/childactivationend) | Triggered when the Router finishes activating a route's children. | | [`ActivationEnd`](../api/router/activationend) | Triggered when the Router finishes activating a route. | | [`NavigationEnd`](../api/router/navigationend) | Triggered when navigation ends successfully. | | [`NavigationCancel`](../api/router/navigationcancel) | Triggered when navigation is canceled. 
This can happen when a [Route Guard](router-tutorial-toh#guards) returns false during navigation, or redirects by returning a `[UrlTree](../api/router/urltree)`. | | [`NavigationError`](../api/router/navigationerror) | Triggered when navigation fails due to an unexpected error. | | [`Scroll`](../api/router/scroll) | Represents a scrolling event. | When you enable the `enableTracing` option, Angular logs these events to the console. For an example of filtering router navigation events, see the [router section](observables-in-angular#router) of the [Observables in Angular](observables-in-angular) guide. Router terminology ------------------ Here are the key `[Router](../api/router/router)` terms and their meanings: | Router part | Details | | --- | --- | | `[Router](../api/router/router)` | Displays the application component for the active URL. Manages navigation from one component to the next. | | `[RouterModule](../api/router/routermodule)` | A separate NgModule that provides the necessary service providers and directives for navigating through application views. | | `[Routes](../api/router/routes)` | Defines an array of Routes, each mapping a URL path to a component. | | `[Route](../api/router/route)` | Defines how the router should navigate to a component based on a URL pattern. Most routes consist of a path and a component type. | | `[RouterOutlet](../api/router/routeroutlet)` | The directive (`<[router-outlet](../api/router/routeroutlet)>`) that marks where the router displays a view. | | `[RouterLink](../api/router/routerlink)` | The directive for binding a clickable HTML element to a route. Clicking an element with a `[routerLink](../api/router/routerlink)` directive that's bound to a *string* or a *link parameters array* triggers a navigation. | | `[RouterLinkActive](../api/router/routerlinkactive)` | The directive for adding/removing classes from an HTML element when an associated `[routerLink](../api/router/routerlink)` contained on or inside the element becomes active/inactive. It can also set the `aria-current` of an active link for better accessibility. | | `[ActivatedRoute](../api/router/activatedroute)` | A service that's provided to each route component that contains route specific information such as route parameters, static data, resolve data, global query parameters, and the global fragment. | | `[RouterState](../api/router/routerstate)` | The current state of the router including a tree of the currently activated routes together with convenience methods for traversing the route tree. | | Link parameters array | An array that the router interprets as a routing instruction. You can bind that array to a `[RouterLink](../api/router/routerlink)` or pass the array as an argument to the `Router.navigate` method. | | Routing component | An Angular component with a `[RouterOutlet](../api/router/routeroutlet)` that displays views based on router navigations. | Last reviewed on Mon Feb 28 2022
angular Workspace and project file structure Workspace and project file structure ==================================== You develop applications in the context of an Angular [workspace](glossary#workspace). A workspace contains the files for one or more [projects](glossary#project). A project is the set of files that comprise a standalone application or a shareable library. The Angular CLI `ng new` command creates a workspace. ``` ng new <my-project> ``` When you run this command, the CLI installs the necessary Angular npm packages and other dependencies in a new workspace, with a root-level application named *my-project*. The workspace root folder contains various support and configuration files, and a README file with generated descriptive text that you can customize. By default, `ng new` creates an initial skeleton application at the root level of the workspace, along with its end-to-end tests. The skeleton is for a simple Welcome application that is ready to run and easy to modify. The root-level application has the same name as the workspace, and the source files reside in the `src/` subfolder of the workspace. This default behavior is suitable for a typical "multi-repo" development style where each application resides in its own workspace. Beginners and intermediate users are encouraged to use `ng new` to create a separate workspace for each application. Angular also supports workspaces with [multiple projects](file-structure#multiple-projects). This type of development environment is suitable for advanced users who are developing [shareable libraries](glossary#library), and for enterprises that use a "monorepo" development style, with a single repository and global configuration for all Angular projects. To set up a monorepo workspace, you should skip creating the root application. See [Setting up for a multi-project workspace](file-structure#multiple-projects) below. Workspace configuration files ----------------------------- All projects within a workspace share a [CLI configuration context](workspace-config). The top level of the workspace contains workspace-wide configuration files, configuration files for the root-level application, and subfolders for the root-level application source and test files. | Workspace configuration files | Purpose | | --- | --- | | `.editorconfig` | Configuration for code editors. See [EditorConfig](https://editorconfig.org). | | `.gitignore` | Specifies intentionally untracked files that [Git](https://git-scm.com) should ignore. | | `README.md` | Introductory documentation for the root application. | | `angular.json` | CLI configuration defaults for all projects in the workspace, including configuration options for build, serve, and test tools that the CLI uses, such as [Karma](https://karma-runner.github.io) and [Protractor](https://www.protractortest.org). For details, see [Angular Workspace Configuration](workspace-config). | | `package.json` | Configures [npm package dependencies](npm-packages) that are available to all projects in the workspace. See [npm documentation](https://docs.npmjs.com/files/package.json) for the specific format and contents of this file. | | `package-lock.json` | Provides version information for all packages installed into `node_modules` by the npm client. See [npm documentation](https://docs.npmjs.com/files/package-lock.json) for details. If you use the yarn client, this file will be [yarn.lock](https://yarnpkg.com/lang/en/docs/yarn-lock) instead. | | `src/` | Source files for the root-level application project. 
| | `node_modules/` | Provides [npm packages](npm-packages) to the entire workspace. Workspace-wide `node_modules` dependencies are visible to all projects. | | `tsconfig.json` | The base [TypeScript](https://www.typescriptlang.org) configuration for projects in the workspace. All other configuration files inherit from this base file. For more information, see the [Configuration inheritance with extends](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html#configuration-inheritance-with-extends) section of the TypeScript documentation. | Application project files ------------------------- By default, the CLI command `ng new my-app` creates a workspace folder named "my-app" and generates a new application skeleton in a `src/` folder at the top level of the workspace. A newly generated application contains source files for a root module, with a root component and template. When the workspace file structure is in place, you can use the `ng generate` command on the command line to add functionality and data to the application. This initial root-level application is the *default app* for CLI commands (unless you change the default after creating [additional apps](file-structure#multiple-projects)). > Besides using the CLI on the command line, you can also manipulate files directly in the application's source folder and configuration files. > > For a single-application workspace, the `src` subfolder of the workspace contains the source files (application logic, data, and assets) for the root application. For a multi-project workspace, additional projects in the `projects` folder contain a `project-name/src/` subfolder with the same structure. ### Application source files Files at the top level of `src/` support testing and running your application. Subfolders contain the application source and application-specific configuration. | Application support files | Purpose | | --- | --- | | `app/` | Contains the component files in which your application logic and data are defined. See details [below](file-structure#app-src). | | `assets/` | Contains image and other asset files to be copied as-is when you build your application. | | `favicon.ico` | An icon to use for this application in the bookmark bar. | | `index.html` | The main HTML page that is served when someone visits your site. The CLI automatically adds all JavaScript and CSS files when building your app, so you typically don't need to add any `<script>` or `<link>` tags here manually. | | `main.ts` | The main entry point for your application. Compiles the application with the [JIT compiler](glossary#jit) and bootstraps the application's root module (AppModule) to run in the browser. You can also use the [AOT compiler](aot-compiler) without changing any code by appending the `--aot` flag to the CLI `build` and `serve` commands. | | `styles.sass` | Lists CSS files that supply styles for a project. The extension reflects the style preprocessor you have configured for the project. | > New Angular projects use strict mode by default. If this is not desired you can opt out when creating the project. For more information, see [Strict mode](strict-mode). > > Inside the `src` folder, the `app` folder contains your project's logic and data. Angular components, templates, and styles go here. | `src/app/` files | Purpose | | --- | --- | | `app/app.component.ts` | Defines the logic for the application's root component, named `AppComponent`. 
The view associated with this root component becomes the root of the [view hierarchy](glossary#view-hierarchy) as you add components and services to your application. | | `app/app.component.html` | Defines the HTML template associated with the root `AppComponent`. | | `app/app.component.css` | Defines the base CSS stylesheet for the root `AppComponent`. | | `app/app.component.spec.ts` | Defines a unit test for the root `AppComponent`. | | `app/app.module.ts` | Defines the root module, named `AppModule`, that tells Angular how to assemble the application. Initially declares only the `AppComponent`. As you add more components to the app, they must be declared here. | ### Application configuration files The application-specific configuration files for the root application reside at the workspace root level. For a multi-project workspace, project-specific configuration files are in the project root, under `projects/project-name/`. Project-specific [TypeScript](https://www.typescriptlang.org) configuration files inherit from the workspace-wide `tsconfig.json`. | Application-specific configuration files | Purpose | | --- | --- | | `tsconfig.app.json` | Application-specific [TypeScript](https://www.typescriptlang.org) configuration, including TypeScript and Angular template compiler options. See [TypeScript Configuration](typescript-configuration) and [Angular Compiler Options](angular-compiler-options). | | `tsconfig.spec.json` | [TypeScript](https://www.typescriptlang.org) configuration for the application tests. See [TypeScript Configuration](typescript-configuration). | Multiple projects ----------------- A multi-project workspace is suitable for an enterprise that uses a single repository and global configuration for all Angular projects (the "monorepo" model). A multi-project workspace also supports library development. ### Setting up for a multi-project workspace If you intend to have multiple projects in a workspace, you can skip the initial application generation when you create the workspace, and give the workspace a unique name. The following command creates a workspace with all of the workspace-wide configuration files, but no root-level application. ``` ng new my-workspace --no-create-application ``` You can then generate applications and libraries with names that are unique within the workspace. ``` cd my-workspace ng generate application my-first-app ``` ### Multiple project file structure The first explicitly generated application goes into the `projects` folder along with all other projects in the workspace. Newly generated libraries are also added under `projects`. When you create projects this way, the file structure of the workspace is entirely consistent with the structure of the [workspace configuration file](workspace-config), `angular.json`. ``` my-workspace … (workspace-wide config files) projects (generated applications and libraries) my-first-app --(an explicitly generated application) … --(application-specific config) src --(source and support files for application) my-lib --(a generated library) … --(library-specific config) src --source and support files for library) ``` Library project files --------------------- When you generate a library using the CLI (with a command such as `ng generate library my-lib`), the generated files go into the `projects/` folder of the workspace. For more information about creating your own libraries, see [Creating Libraries](creating-libraries). Libraries unlike applications have their own `package.json` configuration file. 
Under the `projects/` folder, the `my-lib` folder contains your library code. | Library source files | Purpose | | --- | --- | | `src/lib` | Contains your library project's logic and data. Like an application project, a library project can contain components, services, modules, directives, and pipes. | | `src/public-api.ts` | Specifies all files that are exported from your library. | | `ng-package.json` | Configuration file used by [ng-packagr](https://github.com/ng-packagr/ng-packagr) for building your library. | | `package.json` | Configures [npm package dependencies](npm-packages) that are required for this library. | | `tsconfig.lib.json` | Library-specific [TypeScript](https://www.typescriptlang.org) configuration, including TypeScript and Angular template compiler options. See [TypeScript Configuration](typescript-configuration). | | `tsconfig.lib.prod.json` | Library-specific [TypeScript](https://www.typescriptlang.org) configuration that is used when building the library in production mode. | | `tsconfig.spec.json` | [TypeScript](https://www.typescriptlang.org) configuration for the library tests. See [TypeScript Configuration](typescript-configuration). | Last reviewed on Mon Oct 24 2022 angular Lifecycle hooks Lifecycle hooks =============== A component instance has a lifecycle that starts when Angular instantiates the component class and renders the component view along with its child views. The lifecycle continues with change detection, as Angular checks to see when data-bound properties change, and updates both the view and the component instance as needed. The lifecycle ends when Angular destroys the component instance and removes its rendered template from the DOM. Directives have a similar lifecycle, as Angular creates, updates, and destroys instances in the course of execution. Your application can use [lifecycle hook methods](glossary#lifecycle-hook "Definition of lifecycle hook") to tap into key events in the lifecycle of a component or directive to initialize new instances, initiate change detection when needed, respond to updates during change detection, and clean up before deletion of instances. Prerequisites ------------- Before working with lifecycle hooks, you should have a basic understanding of the following: * [TypeScript programming](https://www.typescriptlang.org) * Angular app-design fundamentals, as described in [Angular Concepts](architecture "Introduction to fundamental app-design concepts") Responding to lifecycle events ------------------------------ Respond to events in the lifecycle of a component or directive by implementing one or more of the *lifecycle hook* interfaces in the Angular `core` library. The hooks give you the opportunity to act on a component or directive instance at the appropriate moment, as Angular creates, updates, or destroys that instance. Each interface defines the prototype for a single hook method, whose name is the interface name prefixed with `ng`. For example, the `[OnInit](../api/core/oninit)` interface has a hook method named `ngOnInit()`. If you implement this method in your component or directive class, Angular calls it shortly after checking the input properties for that component or directive for the first time. 
``` @Directive({selector: '[appPeekABoo]'}) export class PeekABooDirective implements OnInit { constructor(private logger: LoggerService) { } // implement OnInit's `ngOnInit` method ngOnInit() { this.logIt('OnInit'); } logIt(msg: string) { this.logger.log(`#${nextId++} ${msg}`); } } ``` You don't have to implement all (or any) of the lifecycle hooks, just the ones you need. ### Lifecycle event sequence After your application instantiates a component or directive by calling its constructor, Angular calls the hook methods you have implemented at the appropriate point in the lifecycle of that instance. Angular executes hook methods in the following sequence. Use them to perform the following kinds of operations. | Hook method | Purpose | Timing | | --- | --- | --- | | `ngOnChanges()` | Respond when Angular sets or resets data-bound input properties. The method receives a `[SimpleChanges](../api/core/simplechanges)` object of current and previous property values. **NOTE**: This happens frequently, so any operation you perform here impacts performance significantly. See details in [Using change detection hooks](lifecycle-hooks#onchanges) in this document. | Called before `ngOnInit()` (if the component has bound inputs) and whenever one or more data-bound input properties change. **NOTE**: If your component has no inputs or you use it without providing any inputs, the framework will not call `ngOnChanges()`. | | `ngOnInit()` | Initialize the directive or component after Angular first displays the data-bound properties and sets the directive or component's input properties. See details in [Initializing a component or directive](lifecycle-hooks#oninit) in this document. | Called once, after the first `ngOnChanges()`. `ngOnInit()` is still called even when `ngOnChanges()` is not (which is the case when there are no template-bound inputs). | | `ngDoCheck()` | Detect and act upon changes that Angular can't or won't detect on its own. See details and example in [Defining custom change detection](lifecycle-hooks#docheck) in this document. | Called immediately after `ngOnChanges()` on every change detection run, and immediately after `ngOnInit()` on the first run. | | `ngAfterContentInit()` | Respond after Angular projects external content into the component's view, or into the view that a directive is in. See details and example in [Responding to changes in content](lifecycle-hooks#aftercontent) in this document. | Called *once* after the first `ngDoCheck()`. | | `ngAfterContentChecked()` | Respond after Angular checks the content projected into the directive or component. See details and example in [Responding to projected content changes](lifecycle-hooks#aftercontent) in this document. | Called after `ngAfterContentInit()` and every subsequent `ngDoCheck()`. | | `ngAfterViewInit()` | Respond after Angular initializes the component's views and child views, or the view that contains the directive. See details and example in [Responding to view changes](lifecycle-hooks#afterview) in this document. | Called *once* after the first `ngAfterContentChecked()`. | | `ngAfterViewChecked()` | Respond after Angular checks the component's views and child views, or the view that contains the directive. | Called after the `ngAfterViewInit()` and every subsequent `ngAfterContentChecked()`. | | `ngOnDestroy()` | Cleanup just before Angular destroys the directive or component. Unsubscribe Observables and detach event handlers to avoid memory leaks. 
See details in [Cleaning up on instance destruction](lifecycle-hooks#ondestroy) in this document. | Called immediately before Angular destroys the directive or component. | ### Lifecycle example set The live example demonstrates the use of lifecycle hooks through a series of exercises presented as components under the control of the root `AppComponent`. In each case a *parent* component serves as a test rig for a *child* component that illustrates one or more of the lifecycle hook methods. The following table lists the exercises with brief descriptions. The sample code is also used to illustrate specific tasks in the following sections. | Component | Details | | --- | --- | | [Peek-a-boo](lifecycle-hooks#peek-a-boo) | Demonstrates every lifecycle hook. Each hook method writes to the on-screen log. | | [Spy](lifecycle-hooks#spy) | Shows how to use lifecycle hooks with a custom directive. The `SpyDirective` implements the `ngOnInit()` and `ngOnDestroy()` hooks, and uses them to watch and report when an element goes in or out of the current view. | | [OnChanges](lifecycle-hooks#onchanges) | Demonstrates how Angular calls the `ngOnChanges()` hook every time one of the component input properties changes, and shows how to interpret the `changes` object passed to the hook method. | | [DoCheck](lifecycle-hooks#docheck) | Implements the `ngDoCheck()` method with custom change detection. Watch the hook post changes to a log to see how often Angular calls this hook. | | [AfterView](lifecycle-hooks#afterview) | Shows what Angular means by a [view](glossary#view "Definition of view."). Demonstrates the `ngAfterViewInit()` and `ngAfterViewChecked()` hooks. | | [AfterContent](lifecycle-hooks#aftercontent) | Shows how to project external content into a component and how to distinguish projected content from a component's view children. Demonstrates the `ngAfterContentInit()` and `ngAfterContentChecked()` hooks. | | [Counter](lifecycle-hooks#counter) | Demonstrates a combination of a component and a directive, each with its own hooks. | Initializing a component or directive ------------------------------------- Use the `ngOnInit()` method to perform the following initialization tasks. | Initialization tasks | Details | | --- | --- | | Perform complex initializations outside of the constructor | Components should be cheap and safe to construct. You should not, for example, fetch data in a component constructor. You shouldn't worry that a new component will try to contact a remote server when created under test or before you decide to display it. An `ngOnInit()` is a good place for a component to fetch its initial data. For an example, see the [Tour of Heroes tutorial](../tutorial/tour-of-heroes/toh-pt4#oninit). | | Set up the component after Angular sets the input properties | Constructors should do no more than set the initial local variables to simple values. Keep in mind that a directive's data-bound input properties are not set until *after construction*. If you need to initialize the directive based on those properties, set them when `ngOnInit()` runs. The `ngOnChanges()` method is your first opportunity to access those properties. Angular calls `ngOnChanges()` before `ngOnInit()`, but also many times after that. It only calls `ngOnInit()` once. | Cleaning up on instance destruction ----------------------------------- Put cleanup logic in `ngOnDestroy()`, the logic that must run before Angular destroys the directive. This is the place to free resources that won't be garbage-collected automatically. 
You risk memory leaks if you neglect to do so. * Unsubscribe from Observables and DOM events * Stop interval timers * Unregister all callbacks that the directive registered with global or application services The `ngOnDestroy()` method is also the time to notify another part of the application that the component is going away. General examples ---------------- The following examples demonstrate the call sequence and relative frequency of the various lifecycle events, and how the hooks can be used separately or together for components and directives. ### Sequence and frequency of all lifecycle events To show how Angular calls the hooks in the expected order, the `PeekABooComponent` demonstrates all of the hooks in one component. In practice you would rarely, if ever, implement all of the interfaces the way this demo does. The following snapshot reflects the state of the log after the user clicked the **Create…** button and then the **Destroy…** button. The sequence of log messages follows the prescribed hook calling order: | Hook order | Log message | | --- | --- | | 1 | `[OnChanges](../api/core/onchanges)` | | 2 | `[OnInit](../api/core/oninit)` | | 3 | `[DoCheck](../api/core/docheck)` | | 4 | `[AfterContentInit](../api/core/aftercontentinit)` | | 5 | `[AfterContentChecked](../api/core/aftercontentchecked)` | | 6 | `[AfterViewInit](../api/core/afterviewinit)` | | 7 | `[AfterViewChecked](../api/core/afterviewchecked)` | | 8 | `[DoCheck](../api/core/docheck)` | | 9 | `[AfterContentChecked](../api/core/aftercontentchecked)` | | 10 | `[AfterViewChecked](../api/core/afterviewchecked)` | | 11 | `[OnDestroy](../api/core/ondestroy)` | > Notice that the log confirms that input properties (the `name` property in this case) have no assigned values at construction. The input properties are available to the `onInit()` method for further initialization. > > Had the user clicked the *Update Hero* button, the log would show another `[OnChanges](../api/core/onchanges)` and two more triplets of `[DoCheck](../api/core/docheck)`, `[AfterContentChecked](../api/core/aftercontentchecked)`, and `[AfterViewChecked](../api/core/afterviewchecked)`. Notice that these three hooks fire *often*, so it is important to keep their logic as lean as possible. ### Use directives to watch the DOM The `Spy` example demonstrates how to use the hook method for directives as well as components. The `SpyDirective` implements two hooks, `ngOnInit()` and `ngOnDestroy()`, to discover when a watched element is in the current view. This template applies the `SpyDirective` to a `<div>` in the `[ngFor](../api/common/ngfor)` *hero* repeater managed by the parent `SpyComponent`. The example does not perform any initialization or clean-up. It just tracks the appearance and disappearance of an element in the view by recording when the directive itself is instantiated and destroyed. A spy directive like this can provide insight into a DOM object that you cannot change directly. You can't access the implementation of a built-in `<div>`, or modify a third party component. You do have the option to watch these elements with a directive. The directive defines `ngOnInit()` and `ngOnDestroy()` hooks that log messages to the parent using an injected `LoggerService`. ``` let nextId = 1; // Spy on any element to which it is applied. 
// Usage: <div appSpy>...</div> @Directive({selector: '[appSpy]'}) export class SpyDirective implements OnInit, OnDestroy { private id = nextId++; constructor(private logger: LoggerService) { } ngOnInit() { this.logger.log(`Spy #${this.id} onInit`); } ngOnDestroy() { this.logger.log(`Spy #${this.id} onDestroy`); } } ``` Apply the spy to any built-in or component element, and see that it is initialized and destroyed at the same time as that element. Here it is attached to the repeated hero `<p>`: ``` <p *ngFor="let hero of heroes" appSpy> {{hero}} </p> ``` Each spy's creation and destruction marks the appearance and disappearance of the attached hero `<p>` with an entry in the *Hook Log*. Adding a hero results in a new hero `<p>`. The spy's `ngOnInit()` logs that event. The *Reset* button clears the `heroes` list. Angular removes all hero `<p>` elements from the DOM and destroys their spy directives at the same time. The spy's `ngOnDestroy()` method reports its last moments. ### Use component and directive hooks together In this example, a `CounterComponent` uses the `ngOnChanges()` method to log a change every time the parent component increments its input `counter` property. This example applies the `SpyDirective` from the previous example to the `CounterComponent` log, to watch the creation and destruction of log entries. Using change detection hooks ---------------------------- Angular calls the `ngOnChanges()` method of a component or directive whenever it detects changes to the ***input properties***. The *onChanges* example demonstrates this by monitoring the `[OnChanges](../api/core/onchanges)()` hook. ``` ngOnChanges(changes: SimpleChanges) { for (const propName in changes) { const chng = changes[propName]; const cur = JSON.stringify(chng.currentValue); const prev = JSON.stringify(chng.previousValue); this.changeLog.push(`${propName}: currentValue = ${cur}, previousValue = ${prev}`); } } ``` The `ngOnChanges()` method takes an object that maps each changed property name to a [SimpleChange](../api/core/simplechange) object holding the current and previous property values. This hook iterates over the changed properties and logs them. The example component, `OnChangesComponent`, has two input properties: `hero` and `power`. ``` @Input() hero!: Hero; @Input() power = ''; ``` The host `OnChangesParentComponent` binds to them as follows. ``` <on-changes [hero]="hero" [power]="power"></on-changes> ``` Here's the sample in action as the user makes changes. The log entries appear as the string value of the *power* property changes. Notice, however, that the `ngOnChanges()` method does not catch changes to `hero.name`. This is because Angular calls the hook only when the value of the input property changes. In this case, `hero` is the input property, and the value of the `hero` property is the *reference to the hero object*. The object reference did not change when the value of its own `name` property changed. ### Responding to view changes As Angular traverses the [view hierarchy](glossary#view-hierarchy "Definition of view hierarchy definition") during change detection, it needs to be sure that a change in a child does not attempt to cause a change in its own parent. Such a change would not be rendered properly, because of how [unidirectional data flow](glossary#unidirectional-data-flow "Definition") works. If you need to make a change that inverts the expected data flow, you must trigger a new change detection cycle to allow that change to be rendered. 
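For example, a hook that runs after the view has been checked can compute a new value but defer the actual assignment to a later turn of the browser's event loop; the deferred write then starts a fresh change detection cycle. The following is only a minimal sketch of that pattern (the `comment` property and `computeComment()` helper are illustrative; the *AfterView* sample below achieves the same effect with a logger helper).

```
import { AfterViewChecked, Component } from '@angular/core';

@Component({
  selector: 'app-deferred-update',
  template: '<p>{{comment}}</p>'
})
export class DeferredUpdateComponent implements AfterViewChecked {
  comment = '';

  ngAfterViewChecked() {
    // A value that is only known after the view has been checked (placeholder logic).
    const next = this.computeComment();
    if (next !== this.comment) {
      // Writing synchronously here would change an already-checked binding.
      // Deferring with setTimeout schedules the write for a later macrotask,
      // which triggers a new change detection cycle that renders the value.
      setTimeout(() => this.comment = next);
    }
  }

  private computeComment() {
    return 'updated after the view was checked';
  }
}
```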
The examples illustrate how to make such changes safely. The *AfterView* sample explores the `[AfterViewInit](../api/core/afterviewinit)()` and `[AfterViewChecked](../api/core/afterviewchecked)()` hooks that Angular calls *after* it creates a component's child views. Here's a child view that displays a hero's name in an `<input>`: ``` @Component({ selector: 'app-child-view', template: ` <label for="hero-name">Hero name: </label> <input type="text" id="hero-name" [(ngModel)]="hero"> ` }) export class ChildViewComponent { hero = 'Magneta'; } ``` The `AfterViewComponent` displays this child view *within its template*: ``` template: ` <div>child view begins</div> <app-child-view></app-child-view> <div>child view ends</div> ` ``` The following hooks take action based on changing values *within the child view*, which can only be reached by querying for the child view using the property decorated with [@ViewChild](../api/core/viewchild). ``` export class AfterViewComponent implements AfterViewChecked, AfterViewInit { private prevHero = ''; // Query for a VIEW child of type `ChildViewComponent` @ViewChild(ChildViewComponent) viewChild!: ChildViewComponent; ngAfterViewInit() { // viewChild is set after the view has been initialized this.logIt('AfterViewInit'); this.doSomething(); } ngAfterViewChecked() { // viewChild is updated after the view has been checked if (this.prevHero === this.viewChild.hero) { this.logIt('AfterViewChecked (no change)'); } else { this.prevHero = this.viewChild.hero; this.logIt('AfterViewChecked'); this.doSomething(); } } // ... } ``` #### Wait before updating the view In this example, the `doSomething()` method updates the screen when the hero name exceeds 10 characters, but waits a tick before updating `comment`. ``` // This surrogate for real business logic sets the `comment` private doSomething() { const c = this.viewChild.hero.length > 10 ? "That's a long name" : ''; if (c !== this.comment) { // Wait a tick because the component's view has already been checked this.logger.tick_then(() => this.comment = c); } } ``` Both the `[AfterViewInit](../api/core/afterviewinit)()` and `[AfterViewChecked](../api/core/afterviewchecked)()` hooks fire after the component's view is composed. If you modify the code so that the hook updates the component's data-bound `comment` property immediately, you can see that Angular throws an error. The `LoggerService.tick_then()` statement postpones the log update for one turn of the browser's JavaScript cycle, which triggers a new change-detection cycle. #### Write lean hook methods to avoid performance problems When you run the *AfterView* sample, notice how frequently Angular calls `[AfterViewChecked](../api/core/afterviewchecked)()` - often when there are no changes of interest. Be careful about how much logic or computation you put into one of these methods. ### Responding to projected content changes *Content projection* is a way to import HTML content from outside the component and insert that content into the component's template in a designated spot. Identify content projection in a template by looking for the following constructs. * HTML between component element tags * The presence of `[<ng-content>](../api/core/ng-content)` tags in the component's template > AngularJS developers know this technique as *transclusion*. 
> > The *AfterContent* sample explores the `[AfterContentInit](../api/core/aftercontentinit)()` and `[AfterContentChecked](../api/core/aftercontentchecked)()` hooks that Angular calls *after* Angular projects external content into the component. Consider this variation on the [previous *AfterView*](lifecycle-hooks#afterview) example. This time, instead of including the child view within the template, it imports the content from the `AfterContentComponent` hook's parent. The following is the parent's template. ``` `<after-content> <app-child></app-child> </after-content>` ``` Notice that the `<app-child>` tag is tucked between the `<after-content>` tags. Never put content between a component's element tags *unless you intend to project that content into the component*. Now look at the component's template. ``` template: ` <div>projected content begins</div> <ng-content></ng-content> <div>projected content ends</div> ` ``` The `[<ng-content>](../api/core/ng-content)` tag is a *placeholder* for the external content. It tells Angular where to insert that content. In this case, the projected content is the `<app-child>` from the parent. #### Using AfterContent hooks *AfterContent* hooks are similar to the *AfterView* hooks. The key difference is in the child component. * The *AfterView* hooks concern `[ViewChildren](../api/core/viewchildren)`, the child components whose element tags appear *within* the component's template * The *AfterContent* hooks concern `[ContentChildren](../api/core/contentchildren)`, the child components that Angular projected into the component The following *AfterContent* hooks take action based on changing values in a *content child*, which can only be reached by querying for them using the property decorated with [@ContentChild](../api/core/contentchild). ``` export class AfterContentComponent implements AfterContentChecked, AfterContentInit { private prevHero = ''; comment = ''; // Query for a CONTENT child of type `ChildComponent` @ContentChild(ChildComponent) contentChild!: ChildComponent; ngAfterContentInit() { // contentChild is set after the content has been initialized this.logIt('AfterContentInit'); this.doSomething(); } ngAfterContentChecked() { // contentChild is updated after the content has been checked if (this.prevHero === this.contentChild.hero) { this.logIt('AfterContentChecked (no change)'); } else { this.prevHero = this.contentChild.hero; this.logIt('AfterContentChecked'); this.doSomething(); } } // ... } ``` This component's `doSomething()` method updates the component's data-bound `comment` property immediately. There's no need to [delay the update to ensure proper rendering](lifecycle-hooks#wait-a-tick "Delaying updates"). Angular calls both *AfterContent* hooks before calling either of the *AfterView* hooks. Angular completes composition of the projected content *before* finishing the composition of this component's view. There is a small window between the `AfterContent...` and `AfterView...` hooks that lets you modify the host view. Defining custom change detection -------------------------------- To monitor changes that occur where `ngOnChanges()` won't catch them, implement your own change check, as shown in the *DoCheck* example. This example shows how to use the `ngDoCheck()` hook to detect and act upon changes that Angular doesn't catch on its own. 
The *DoCheck* sample extends the *OnChanges* sample with the following `ngDoCheck()` hook: ``` ngDoCheck() { if (this.hero.name !== this.oldHeroName) { this.changeDetected = true; this.changeLog.push(`DoCheck: Hero name changed to "${this.hero.name}" from "${this.oldHeroName}"`); this.oldHeroName = this.hero.name; } if (this.power !== this.oldPower) { this.changeDetected = true; this.changeLog.push(`DoCheck: Power changed to "${this.power}" from "${this.oldPower}"`); this.oldPower = this.power; } if (this.changeDetected) { this.noChangeCount = 0; } else { // log that hook was called when there was no relevant change. const count = this.noChangeCount += 1; const noChangeMsg = `DoCheck called ${count}x when no change to hero or power`; if (count === 1) { // add new "no change" message this.changeLog.push(noChangeMsg); } else { // update last "no change" message this.changeLog[this.changeLog.length - 1] = noChangeMsg; } } this.changeDetected = false; } ``` This code inspects certain *values of interest*, capturing and comparing their current state against previous values. It writes a special message to the log when there are no substantive changes to the `hero` or the `power` so you can see how often `[DoCheck](../api/core/docheck)()` is called. The results are illuminating. While the `ngDoCheck()` hook can detect when the hero's `name` has changed, it is an expensive hook. This hook is called with enormous frequency —after *every* change detection cycle no matter where the change occurred. It's called over twenty times in this example before the user can do anything. Most of these initial checks are triggered by Angular's first rendering of *unrelated data elsewhere on the page*. Just moving the cursor into another `<input>` triggers a call. Relatively few calls reveal actual changes to pertinent data. If you use this hook, your implementation must be extremely lightweight or the user experience suffers. Last reviewed on Mon Feb 28 2022
angular Dependency injection in action Dependency injection in action ============================== This guide explores many of the features of dependency injection (DI) in Angular. > See the live example for a working example containing the code snippets in this guide. > > Multiple service instances (sandboxing) --------------------------------------- Sometimes you want multiple instances of a service at *the same level* of the component hierarchy. A good example is a service that holds state for its companion component instance. You need a separate instance of the service for each component. Each service has its own work-state, isolated from the service-and-state of a different component. This is called *sandboxing* because each service and component instance has its own sandbox to play in. In this example, `HeroBiosComponent` presents three instances of `HeroBioComponent`. ``` @Component({ selector: 'app-hero-bios', template: ` <app-hero-bio [heroId]="1"></app-hero-bio> <app-hero-bio [heroId]="2"></app-hero-bio> <app-hero-bio [heroId]="3"></app-hero-bio>`, providers: [HeroService] }) export class HeroBiosComponent { } ``` Each `HeroBioComponent` can edit a single hero's biography. `HeroBioComponent` relies on `HeroCacheService` to fetch, cache, and perform other persistence operations on that hero. ``` @Injectable() export class HeroCacheService { hero!: Hero; constructor(private heroService: HeroService) {} fetchCachedHero(id: number) { if (!this.hero) { this.hero = this.heroService.getHeroById(id); } return this.hero; } } ``` Three instances of `HeroBioComponent` can't share the same instance of `HeroCacheService`, as they'd be competing with each other to determine which hero to cache. Instead, each `HeroBioComponent` gets its *own* `HeroCacheService` instance by listing `HeroCacheService` in its metadata `providers` array. ``` @Component({ selector: 'app-hero-bio', template: ` <h4>{{hero.name}}</h4> <ng-content></ng-content> <textarea cols="25" [(ngModel)]="hero.description"></textarea>`, providers: [HeroCacheService] }) export class HeroBioComponent implements OnInit { @Input() heroId = 0; constructor(private heroCache: HeroCacheService) { } ngOnInit() { this.heroCache.fetchCachedHero(this.heroId); } get hero() { return this.heroCache.hero; } } ``` The parent `HeroBiosComponent` binds a value to `heroId`. `ngOnInit` passes that ID to the service, which fetches and caches the hero. The getter for the `hero` property pulls the cached hero from the service. The template displays this data-bound property. Find this example in live code and confirm that the three `HeroBioComponent` instances have their own cached hero data. Qualify dependency lookup with parameter decorators --------------------------------------------------- When a class requires a dependency, that dependency is added to the constructor as a parameter. When Angular needs to instantiate the class, it calls upon the DI framework to supply the dependency. By default, the DI framework searches for a provider in the injector hierarchy, starting at the component's local injector, and if necessary bubbling up through the injector tree until it reaches the root injector. * The first injector configured with a provider supplies the dependency (a service instance or value) to the constructor * If no provider is found in the root injector, the DI framework throws an error There are a number of options for modifying the default search behavior, using *parameter decorators* on the service-valued parameters of a class constructor. 
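To make the default search concrete, here is a minimal sketch of the bubbling lookup (the component names are hypothetical; `LoggerService` is the sample's logger): the parent registers the provider, and the child simply declares the dependency in its constructor. Because the child's own injector has no matching provider, the request bubbles up and resolves to the parent's instance.

```
import { Component } from '@angular/core';
import { LoggerService } from './logger.service';

@Component({
  selector: 'app-parent',
  template: '<app-child></app-child>',
  providers: [LoggerService]  // the provider lives on the parent's injector
})
export class ParentComponent { }

@Component({
  selector: 'app-child',
  template: '<p>child</p>'
})
export class ChildComponent {
  // No local provider: the lookup starts at this component's injector,
  // finds nothing, and bubbles up to ParentComponent's injector.
  constructor(private logger: LoggerService) {
    this.logger.logInfo('ChildComponent received the parent\'s LoggerService');
  }
}
```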
### Make a dependency `@[Optional](../api/core/optional)` and limit search with `@[Host](../api/core/host)` Dependencies can be registered at any level in the component hierarchy. When a component requests a dependency, Angular starts with that component's injector and walks up the injector tree until it finds the first suitable provider. Angular throws an error if it can't find the dependency during that walk. In some cases, you need to limit the search or accommodate a missing dependency. You can modify Angular's search behavior with the `@[Host](../api/core/host)` and `@[Optional](../api/core/optional)` qualifying decorators on a service-valued parameter of the component's constructor. * The `@[Optional](../api/core/optional)` property decorator tells Angular to return null when it can't find the dependency * The `@[Host](../api/core/host)` property decorator stops the upward search at the *host component*. The host component is typically the component requesting the dependency. However, when this component is projected into a *parent* component, that parent component becomes the host. The following example covers this second case. These decorators can be used individually or together, as shown in the example. This `HeroBiosAndContactsComponent` is a revision of `HeroBiosComponent` which you looked at [above](dependency-injection-in-action#hero-bios-component). ``` @Component({ selector: 'app-hero-bios-and-contacts', template: ` <app-hero-bio [heroId]="1"> <app-hero-contact></app-hero-contact> </app-hero-bio> <app-hero-bio [heroId]="2"> <app-hero-contact></app-hero-contact> </app-hero-bio> <app-hero-bio [heroId]="3"> <app-hero-contact></app-hero-contact> </app-hero-bio>`, providers: [HeroService] }) export class HeroBiosAndContactsComponent { constructor(logger: LoggerService) { logger.logInfo('Creating HeroBiosAndContactsComponent'); } } ``` Focus on the template: ``` template: ` <app-hero-bio [heroId]="1"> <app-hero-contact></app-hero-contact> </app-hero-bio> <app-hero-bio [heroId]="2"> <app-hero-contact></app-hero-contact> </app-hero-bio> <app-hero-bio [heroId]="3"> <app-hero-contact></app-hero-contact> </app-hero-bio>`, ``` Now there's a new `<hero-contact>` element between the `<hero-bio>` tags. Angular *projects*, or *transcludes*, the corresponding `HeroContactComponent` into the `HeroBioComponent` view, placing it in the `[<ng-content>](../api/core/ng-content)` slot of the `HeroBioComponent` template. ``` template: ` <h4>{{hero.name}}</h4> <ng-content></ng-content> <textarea cols="25" [(ngModel)]="hero.description"></textarea>`, ``` The result is shown below, with the hero's telephone number from `HeroContactComponent` projected above the hero description. Here's `HeroContactComponent`, which demonstrates the qualifying decorators. ``` @Component({ selector: 'app-hero-contact', template: ` <div>Phone #: {{phoneNumber}} <span *ngIf="hasLogger">!!!</span></div>` }) export class HeroContactComponent { hasLogger = false; constructor( @Host() // limit to the host component's instance of the HeroCacheService private heroCache: HeroCacheService, @Host() // limit search for logger; hides the application-wide logger @Optional() // ok if the logger doesn't exist private loggerService?: LoggerService ) { if (loggerService) { this.hasLogger = true; loggerService.logInfo('HeroContactComponent can log!'); } } get phoneNumber() { return this.heroCache.hero.phone; } } ``` Focus on the constructor parameters. 
``` @Host() // limit to the host component's instance of the HeroCacheService private heroCache: HeroCacheService, @Host() // limit search for logger; hides the application-wide logger @Optional() // ok if the logger doesn't exist private loggerService?: LoggerService ``` The `@[Host](../api/core/host)()` function decorating the `heroCache` constructor property ensures that you get a reference to the cache service from the parent `HeroBioComponent`. Angular throws an error if the parent lacks that service, even if a component higher in the component tree includes it. A second `@[Host](../api/core/host)()` function decorates the `loggerService` constructor property. The only `LoggerService` instance in the application is provided at the `AppComponent` level. The host `HeroBioComponent` doesn't have its own `LoggerService` provider. Angular throws an error if you haven't also decorated the property with `@[Optional](../api/core/optional)()`. When the property is marked as optional, Angular sets `loggerService` to null and the rest of the component adapts. Here's `HeroBiosAndContactsComponent` in action. If you comment out the `@[Host](../api/core/host)()` decorator, Angular walks up the injector ancestor tree until it finds the logger at the `AppComponent` level. The logger logic kicks in and the hero display updates with the "!!!" marker to indicate that the logger was found. If you restore the `@[Host](../api/core/host)()` decorator and comment out `@[Optional](../api/core/optional)`, the application throws an exception when it cannot find the required logger at the host component level. ``` EXCEPTION: No provider for LoggerService! (HeroContactComponent -> LoggerService) ``` ### Supply a custom provider with `@[Inject](../api/core/inject)` Using a custom provider allows you to provide a concrete implementation for implicit dependencies, such as built-in browser APIs. The following example uses an `[InjectionToken](../api/core/injectiontoken)` to provide the [localStorage](https://developer.mozilla.org/docs/Web/API/Window/localStorage) browser API as a dependency in the `BrowserStorageService`. ``` import { Inject, Injectable, InjectionToken } from '@angular/core'; export const BROWSER_STORAGE = new InjectionToken<Storage>('Browser Storage', { providedIn: 'root', factory: () => localStorage }); @Injectable({ providedIn: 'root' }) export class BrowserStorageService { constructor(@Inject(BROWSER_STORAGE) public storage: Storage) {} get(key: string) { return this.storage.getItem(key); } set(key: string, value: string) { this.storage.setItem(key, value); } remove(key: string) { this.storage.removeItem(key); } clear() { this.storage.clear(); } } ``` The `factory` function returns the `localStorage` property that is attached to the browser window object. The `[Inject](../api/core/inject)` decorator is a constructor parameter used to specify a custom provider of a dependency. This custom provider can now be overridden during testing with a mock API of `localStorage` instead of interacting with real browser APIs. ### Modify the provider search with `@[Self](../api/core/self)` and `@[SkipSelf](../api/core/skipself)` Providers can also be scoped by injector through constructor parameter decorators. The following example overrides the `BROWSER_STORAGE` token in the `[Component](../api/core/component)` class `providers` with the `sessionStorage` browser API. 
The same `BrowserStorageService` is injected twice in the constructor, decorated with `@[Self](../api/core/self)` and `@[SkipSelf](../api/core/skipself)` to define which injector handles the provider dependency. ``` import { Component, OnInit, Self, SkipSelf } from '@angular/core'; import { BROWSER_STORAGE, BrowserStorageService } from './storage.service'; @Component({ selector: 'app-storage', template: ` Open the inspector to see the local/session storage keys: <h3>Session Storage</h3> <button type="button" (click)="setSession()">Set Session Storage</button> <h3>Local Storage</h3> <button type="button" (click)="setLocal()">Set Local Storage</button> `, providers: [ BrowserStorageService, { provide: BROWSER_STORAGE, useFactory: () => sessionStorage } ] }) export class StorageComponent implements OnInit { constructor( @Self() private sessionStorageService: BrowserStorageService, @SkipSelf() private localStorageService: BrowserStorageService, ) { } ngOnInit() { } setSession() { this.sessionStorageService.set('hero', 'Dr Nice - Session'); } setLocal() { this.localStorageService.set('hero', 'Dr Nice - Local'); } } ``` Using the `@[Self](../api/core/self)` decorator, the injector only looks at the component's injector for its providers. The `@[SkipSelf](../api/core/skipself)` decorator allows you to skip the local injector and look up in the hierarchy to find a provider that satisfies this dependency. The `sessionStorageService` instance interacts with the `BrowserStorageService` using the `sessionStorage` browser API, while the `localStorageService` skips the local injector and uses the root `BrowserStorageService` that uses the `localStorage` browser API. Inject the component's DOM element ---------------------------------- Although developers strive to avoid it, many visual effects and third-party tools, such as jQuery, require DOM access. As a result, you might need to access a component's DOM element. To illustrate, here's a minimal version of `HighlightDirective` from the [Attribute Directives](attribute-directives) page. ``` import { Directive, ElementRef, HostListener, Input } from '@angular/core'; @Directive({ selector: '[appHighlight]' }) export class HighlightDirective { @Input('appHighlight') highlightColor = ''; private el: HTMLElement; constructor(el: ElementRef) { this.el = el.nativeElement; } @HostListener('mouseenter') onMouseEnter() { this.highlight(this.highlightColor || 'cyan'); } @HostListener('mouseleave') onMouseLeave() { this.highlight(''); } private highlight(color: string) { this.el.style.backgroundColor = color; } } ``` The directive sets the background to a highlight color when the user mouses over the DOM element to which the directive is applied. Angular sets the constructor's `el` parameter to the injected `[ElementRef](../api/core/elementref)`. (An `[ElementRef](../api/core/elementref)` is a wrapper around a DOM element, whose `nativeElement` property exposes the DOM element for the directive to manipulate.) The sample code applies the directive's `appHighlight` attribute to two `<div>` tags, first without a value (yielding the default color) and then with an assigned color value. ``` <div id="highlight" class="di-component" appHighlight> <h3>Hero Bios and Contacts</h3> <div appHighlight="yellow"> <app-hero-bios-and-contacts></app-hero-bios-and-contacts> </div> </div> ``` The following image shows the effect of mousing over the `<hero-bios-and-contacts>` tag. 
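Because the directive exposes its color through the `appHighlight` input alias, the attribute can also be bound to a component property instead of a literal string. A minimal sketch follows (the component and its `chosenColor` property are hypothetical, and it assumes `HighlightDirective` is declared in the same NgModule):

```
import { Component } from '@angular/core';

@Component({
  selector: 'app-highlight-demo',
  template: `
    <!-- the directive's input is bound to a component property -->
    <p [appHighlight]="chosenColor">Mouse over this paragraph to highlight it.</p>
  `
})
export class HighlightDemoComponent {
  chosenColor = 'gold'; // any CSS color value accepted by background-color
}
```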
### Defining providers A dependency can't always be created by the default method of instantiating a class. You learned about some other methods in [Dependency Providers](dependency-injection-providers). The following `HeroOfTheMonthComponent` example demonstrates many of the alternatives and why you need them. It's visually simple: a few properties and the logs produced by a logger. The code behind it customizes how and where the DI framework provides dependencies. The use cases illustrate different ways to use the *provide* object literal to associate a definition object with a DI token. ``` import { Component, Inject } from '@angular/core'; import { DateLoggerService } from './date-logger.service'; import { Hero } from './hero'; import { HeroService } from './hero.service'; import { LoggerService } from './logger.service'; import { MinimalLogger } from './minimal-logger.service'; import { RUNNERS_UP, runnersUpFactory } from './runners-up'; @Component({ selector: 'app-hero-of-the-month', templateUrl: './hero-of-the-month.component.html', providers: [ { provide: Hero, useValue: someHero }, { provide: TITLE, useValue: 'Hero of the Month' }, { provide: HeroService, useClass: HeroService }, { provide: LoggerService, useClass: DateLoggerService }, { provide: MinimalLogger, useExisting: LoggerService }, { provide: RUNNERS_UP, useFactory: runnersUpFactory(2), deps: [Hero, HeroService] } ] }) export class HeroOfTheMonthComponent { logs: string[] = []; constructor( logger: MinimalLogger, public heroOfTheMonth: Hero, @Inject(RUNNERS_UP) public runnersUp: string, @Inject(TITLE) public title: string) { this.logs = logger.logs; logger.logInfo('starting up'); } } ``` The `providers` array shows how you might use the different provider-definition keys: `useValue`, `useClass`, `useExisting`, or `useFactory`. #### Value providers: `useValue` The `useValue` key lets you associate a fixed value with a DI token. Use this technique to provide *runtime configuration constants* such as website base addresses and feature flags. You can also use a value provider in a unit test to provide mock data in place of a production data service. The `HeroOfTheMonthComponent` example has two value providers. ``` { provide: Hero, useValue: someHero }, { provide: TITLE, useValue: 'Hero of the Month' }, ``` * The first provides an existing instance of the `Hero` class to use for the `Hero` token, rather than requiring the injector to create a new instance with `new` or use its own cached instance. Here, the token is the class itself. * The second specifies a literal string resource to use for the `TITLE` token. The `TITLE` provider token is *not* a class, but is instead a special kind of provider lookup key called an [injection token](dependency-injection-in-action#injection-token), represented by an `[InjectionToken](../api/core/injectiontoken)` instance. You can use an injection token for any kind of provider but it's particularly helpful when the dependency is a simple value like a string, a number, or a function. The value of a *value provider* must be defined before you specify it here. The title string literal is immediately available. The `someHero` variable in this example was set earlier in the file as shown below. You can't use a variable whose value will be defined later. ``` const someHero = new Hero(42, 'Magma', 'Had a great month!', '555-555-5555'); ``` Other types of providers can create their values *lazily*; that is, when they're needed for injection. 
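The `TITLE` injection token used above is created elsewhere in the sample; a minimal definition of such a token might look like the following sketch (the description string is arbitrary and only surfaces in error messages):

```
import { InjectionToken } from '@angular/core';

// A token for a plain string dependency. Using an InjectionToken avoids the
// collisions that a bare string key could cause, and works for values whose
// TypeScript types (strings, interfaces) don't exist at runtime.
export const TITLE = new InjectionToken<string>('Hero of the Month title');
```

The component then asks for the value with `@Inject(TITLE)`, as shown in its constructor above.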
#### Class providers: `useClass` The `useClass` provider key lets you create and return a new instance of the specified class. You can use this type of provider to substitute an *alternative implementation* for a common or default class. The alternative implementation could, for example, implement a different strategy, extend the default class, or emulate the behavior of the real class in a test case. The following code shows two examples in `HeroOfTheMonthComponent`. ``` { provide: HeroService, useClass: HeroService }, { provide: LoggerService, useClass: DateLoggerService }, ``` The first provider is the *de-sugared*, expanded form of the most typical case in which the class to be created (`HeroService`) is also the provider's dependency injection token. The short form is generally preferred; this long form makes the details explicit. The second provider substitutes `DateLoggerService` for `LoggerService`. `LoggerService` is already registered at the `AppComponent` level. When this child component requests `LoggerService`, it receives a `DateLoggerService` instance instead. > This component and its tree of child components receive `DateLoggerService` instance. Components outside the tree continue to receive the original `LoggerService` instance. > > `DateLoggerService` inherits from `LoggerService`; it appends the current date/time to each message: ``` @Injectable({ providedIn: 'root' }) export class DateLoggerService extends LoggerService { override logInfo(msg: any) { super.logInfo(stamp(msg)); } override logDebug(msg: any) { super.logInfo(stamp(msg)); } override logError(msg: any) { super.logError(stamp(msg)); } } function stamp(msg: any) { return msg + ' at ' + new Date(); } ``` #### Alias providers: `useExisting` The `useExisting` provider key lets you map one token to another. In effect, the first token is an *alias* for the service associated with the second token, creating two ways to access the same service object. ``` { provide: MinimalLogger, useExisting: LoggerService }, ``` You can use this technique to narrow an API through an aliasing interface. The following example shows an alias introduced for that purpose. Imagine that `LoggerService` had a large API, much larger than the actual three methods and a property. You might want to shrink that API surface to just the members you actually need. In this example, the `MinimalLogger` [class-interface](dependency-injection-in-action#class-interface) reduces the API to two members: ``` // Class used as a "narrowing" interface that exposes a minimal logger // Other members of the actual implementation are invisible export abstract class MinimalLogger { abstract logs: string[]; abstract logInfo: (msg: string) => void; } ``` The following example puts `MinimalLogger` to use in a simplified version of `HeroOfTheMonthComponent`. ``` @Component({ selector: 'app-hero-of-the-month', templateUrl: './hero-of-the-month.component.html', // TODO: move this aliasing, `useExisting` provider to the AppModule providers: [{ provide: MinimalLogger, useExisting: LoggerService }] }) export class HeroOfTheMonthComponent { logs: string[] = []; constructor(logger: MinimalLogger) { logger.logInfo('starting up'); } } ``` The `HeroOfTheMonthComponent` constructor's `logger` parameter is typed as `MinimalLogger`, so only the `logs` and `logInfo` members are visible in a TypeScript-aware editor. 
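To see what that narrowing buys you, consider this sketch of a hypothetical consumer (not in the sample; it assumes the `MinimalLogger` alias provider is registered in an ancestor injector). A call to a member that exists on `LoggerService` but not on `MinimalLogger` does not compile:

```
import { Component } from '@angular/core';
import { MinimalLogger } from './minimal-logger.service';

@Component({
  selector: 'app-logger-consumer', // hypothetical component for illustration
  template: '<p>{{ logger.logs.length }} log entries</p>'
})
export class LoggerConsumerComponent {
  constructor(public logger: MinimalLogger) {
    this.logger.logInfo('consumer created'); // OK: part of the narrowed API
    // this.logger.logError('boom');         // compile error: not on MinimalLogger
  }
}
```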
Behind the scenes, Angular sets the `logger` parameter to the full service registered under the `LoggerService` token, which happens to be the `DateLoggerService` instance that was [provided above](dependency-injection-in-action#useclass). > This is illustrated in the following image, which displays the logging date. > > #### Factory providers: `useFactory` The `useFactory` provider key lets you create a dependency object by calling a factory function, as in the following example. ``` { provide: RUNNERS_UP, useFactory: runnersUpFactory(2), deps: [Hero, HeroService] } ``` The injector provides the dependency value by invoking a factory function that you provide as the value of the `useFactory` key. Notice that this form of provider has a third key, `deps`, which specifies dependencies for the `useFactory` function. Use this technique to create a dependency object with a factory function whose inputs are a combination of *injected services* and *local state*. The dependency object (returned by the factory function) is typically a class instance, but can be other things as well. In this example, the dependency object is a string of the names of the runners-up to the "Hero of the Month" contest. In the example, the local state is the number `2`, the number of runners-up that the component should show. The state value is passed as an argument to `runnersUpFactory()`. The `runnersUpFactory()` returns the *provider factory function*, which can use both the passed-in state value and the injected services `Hero` and `HeroService`. ``` export function runnersUpFactory(take: number) { return (winner: Hero, heroService: HeroService): string => /* ... */ } ``` The provider factory function (returned by `runnersUpFactory()`) returns the actual dependency object, the string of names. * The function takes a winning `Hero` and a `HeroService` as arguments. Angular supplies these arguments from injected values identified by the two *tokens* in the `deps` array. * The function returns the string of names, which Angular then injects into the `runnersUp` parameter of `HeroOfTheMonthComponent`. > The function retrieves candidate heroes from the `HeroService`, takes `2` of them to be the runners-up, and returns their concatenated names. Look at the live example for the full source code. > > Provider token alternatives: class interface and 'InjectionToken' ----------------------------------------------------------------- Angular dependency injection is easiest when the provider token is a class that is also the type of the returned dependency object, or service. However, a token doesn't have to be a class and even when it is a class, it doesn't have to be the same type as the returned object. That's the subject of the next section. ### Class interface The previous *Hero of the Month* example used the `MinimalLogger` class as the token for a provider of `LoggerService`. ``` { provide: MinimalLogger, useExisting: LoggerService }, ``` `MinimalLogger` is an abstract class. ``` // Class used as a "narrowing" interface that exposes a minimal logger // Other members of the actual implementation are invisible export abstract class MinimalLogger { abstract logs: string[]; abstract logInfo: (msg: string) => void; } ``` An abstract class is usually a base class that you can extend. In this app, however, there is no class that inherits from `MinimalLogger`. The `LoggerService` and the `DateLoggerService` could have inherited from `MinimalLogger`, or they could have implemented it instead, in the manner of an interface.
But they did neither. `MinimalLogger` is used only as a dependency injection token. When you use a class this way, it's called a *class interface*. As mentioned in [Configuring dependency providers](dependency-injection-providers), an interface is not a valid DI token because it is a TypeScript artifact that doesn't exist at run time. Use this abstract class interface to get the strong typing of an interface, and also use it as a provider token in the way you would a normal class. A class interface should define *only* the members that its consumers are allowed to call. Such a narrowing interface helps decouple the concrete class from its consumers. > Using a class as an interface gives you the characteristics of an interface in a real JavaScript object. To minimize memory cost, however, the class should have *no implementation*. The `MinimalLogger` transpiles to this unoptimized, pre-minified JavaScript for a constructor function. > > > ``` > var MinimalLogger = (function () { > function MinimalLogger() {} > return MinimalLogger; > }()); > exports("MinimalLogger", MinimalLogger); > ``` > **NOTE**: It doesn't have any members. It never grows no matter how many members you add to the class, as long as those members are typed but not implemented. > > Look again at the TypeScript `MinimalLogger` class to confirm that it has no implementation. > > ### 'InjectionToken' objects Dependency objects can be simple values like dates, numbers, and strings, or shapeless objects like arrays and functions. Such objects don't have application interfaces and therefore aren't well represented by a class. They're better represented by a token that is both unique and symbolic, a JavaScript object that has a friendly name but won't conflict with another token that happens to have the same name. `[InjectionToken](../api/core/injectiontoken)` has these characteristics. You encountered them twice in the *Hero of the Month* example, in the *title* value provider and in the *runnersUp* factory provider. ``` { provide: TITLE, useValue: 'Hero of the Month' }, { provide: RUNNERS_UP, useFactory: runnersUpFactory(2), deps: [Hero, HeroService] } ``` You created the `TITLE` token like this: ``` import { InjectionToken } from '@angular/core'; export const TITLE = new InjectionToken<string>('title'); ``` The type parameter, while optional, conveys the dependency's type to developers and tooling. The token description is another developer aid. Inject into a derived class --------------------------- Take care when writing a component that inherits from another component. If the base component has injected dependencies, you must re-provide and re-inject them in the derived class and then pass them down to the base class through the constructor. In this contrived example, `SortedHeroesComponent` inherits from `HeroesBaseComponent` to display a *sorted* list of heroes. The `HeroesBaseComponent` can stand on its own. It demands its own instance of `HeroService` to get heroes and displays them in the order they arrive from the database. ``` @Component({ selector: 'app-unsorted-heroes', template: '<div *ngFor="let hero of heroes">{{hero.name}}</div>', providers: [HeroService] }) export class HeroesBaseComponent implements OnInit { constructor(private heroService: HeroService) { } heroes: Hero[] = []; ngOnInit() { this.heroes = this.heroService.getAllHeroes(); this.afterGetHeroes(); } // Post-process heroes in derived class override. 
protected afterGetHeroes() {} } ``` > #### Keep constructors simple > > Constructors should do little more than initialize variables. This rule makes the component safe to construct under test without fear that it will do something dramatic like talk to the server. That's why you call the `HeroService` from within the `ngOnInit` rather than the constructor. > > Users want to see the heroes in alphabetical order. Rather than modify the original component, subclass it and create a `SortedHeroesComponent` that sorts the heroes before presenting them. The `SortedHeroesComponent` lets the base class fetch the heroes. Unfortunately, Angular cannot inject the `HeroService` directly into the base class. You must provide the `HeroService` again for *this* component, then pass it down to the base class inside the constructor. ``` @Component({ selector: 'app-sorted-heroes', template: '<div *ngFor="let hero of heroes">{{hero.name}}</div>', providers: [HeroService] }) export class SortedHeroesComponent extends HeroesBaseComponent { constructor(heroService: HeroService) { super(heroService); } protected override afterGetHeroes() { this.heroes = this.heroes.sort((h1, h2) => h1.name < h2.name ? -1 : (h1.name > h2.name ? 1 : 0)); } } ``` Now take notice of the `afterGetHeroes()` method. Your first instinct might have been to create an `ngOnInit` method in `SortedHeroesComponent` and do the sorting there. But Angular calls the *derived* class's `ngOnInit` *before* calling the base class's `ngOnInit` so you'd be sorting the heroes array *before they arrived*. That produces a nasty error. Overriding the base class's `afterGetHeroes()` method solves the problem. These complications argue for *avoiding component inheritance*. Break circularities with a forward class reference (`forwardRef`) ----------------------------------------------------------------- The order of class declaration matters in TypeScript. You can't refer directly to a class until it's been defined. This isn't usually a problem, especially if you adhere to the recommended *one class per file* rule. But sometimes circular references are unavoidable. You're in a bind when class 'A' refers to class 'B' and 'B' refers to 'A'. One of them has to be defined first. The Angular `[forwardRef](../api/core/forwardref)()` function creates an *indirect* reference that Angular can resolve later. The *Parent Finder* sample is full of circular class references that are impossible to break. You face this dilemma when a class makes *a reference to itself* as does `AlexComponent` in its `providers` array. The `providers` array is a property of the `@[Component](../api/core/component)()` decorator function which must appear *above* the class definition. Break the circularity with `[forwardRef](../api/core/forwardref)`. ``` providers: [{ provide: Parent, useExisting: forwardRef(() => AlexComponent) }], ``` Last reviewed on Mon Feb 28 2022
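For context, here is a fuller sketch of that pattern, loosely based on the *Parent Finder* sample (the `Parent` class interface is assumed here to declare only an abstract `name` property):

```
import { Component, forwardRef } from '@angular/core';

// Class interface used purely as a DI token (assumed shape)
export abstract class Parent {
  abstract name: string;
}

@Component({
  selector: 'alex',
  template: '<p>{{ name }}</p><ng-content></ng-content>',
  // forwardRef() defers resolving AlexComponent until the class has been defined
  providers: [{ provide: Parent, useExisting: forwardRef(() => AlexComponent) }]
})
export class AlexComponent extends Parent {
  name = 'Alex';
}
```

Child components projected into `<ng-content>` can then inject `Parent` and receive the `AlexComponent` instance.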
angular Deploy multiple locales Deploy multiple locales ======================= If `myapp` is the directory that contains the distributable files of your project, you typically make different versions available for different locales in locale directories. For example, your French version is located in the `myapp/fr` directory and the Spanish version is located in the `myapp/es` directory. The HTML `base` tag with the `href` attribute specifies the base URI, or URL, for relative links. If you set the `"localize"` option in [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file to `true` or to an array of locale IDs, the CLI adjusts the base `href` for each version of the application. To adjust the base `href` for each version of the application, the CLI adds the locale to the configured `"baseHref"`. Specify the `"baseHref"` for each locale in your [`angular.json`](workspace-config "Angular workspace configuration | Angular") workspace build configuration file. The following example displays `"baseHref"` set to an empty string. ``` "projects": { "angular.io-example": { // ... "i18n": { "sourceLocale": "en-US", "locales": { "fr": { "translation": "src/locale/messages.fr.xlf", "baseHref": "" } } }, "architect": { // ... } } } // ... } ``` Also, to declare the base `href` at compile time, use the CLI `--baseHref` option with [`ng build`](cli/build "ng build | CLI | Angular"). Configure a server ------------------ Typical deployment of multiple languages serve each language from a different subdirectory. Users are redirected to the preferred language defined in the browser using the `Accept-Language` HTTP header. If the user has not defined a preferred language, or if the preferred language is not available, then the server falls back to the default language. To change the language, change your current location to another subdirectory. The change of subdirectory often occurs using a menu implemented in the application. > For more information on how to deploy apps to a remote server, see [Deployment](deployment "Deployment | Angular"). > > ### Nginx example The following example displays an Nginx configuration. ``` http { # Browser preferred language detection (does NOT require # AcceptLanguageModule) map $http_accept_language $accept_language { ~*^de de; ~*^fr fr; ~*^en en; } # ... } server { listen 80; server_name localhost; root /www/data; # Fallback to default language if no preference defined by browser if ($accept_language ~ "^$") { set $accept_language "fr"; } # Redirect "/" to Angular application in the preferred language of the browser rewrite ^/$ /$accept_language permanent; # Everything under the Angular application is always redirected to Angular in the # correct language location ~ ^/(fr|de|en) { try_files $uri /$1/index.html?$args; } # ... } ``` ### Apache example The following example displays an Apache configuration. ``` <VirtualHost *:80> ServerName localhost DocumentRoot /www/data <Directory "/www/data"> RewriteEngine on RewriteBase / RewriteRule ^../index\.html$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule (..) 
$1/index.html [L] RewriteCond %{HTTP:Accept-Language} ^de [NC] RewriteRule ^$ /de/ [R] RewriteCond %{HTTP:Accept-Language} ^en [NC] RewriteRule ^$ /en/ [R] RewriteCond %{HTTP:Accept-Language} !^en [NC] RewriteCond %{HTTP:Accept-Language} !^de [NC] RewriteRule ^$ /fr/ [R] </Directory> </VirtualHost> ``` Last reviewed on Mon Feb 28 2022 angular Deployment Deployment ========== When you are ready to deploy your Angular application to a remote server, you have various options for deployment. Simple deployment options ------------------------- Before fully deploying your application, you can test the process, build configuration, and deployed behavior by using one of these interim techniques. ### Building and serving from disk During development, you typically use the `ng serve` command to build, watch, and serve the application from local memory, using [webpack-dev-server](https://webpack.js.org/guides/development/#webpack-dev-server). When you are ready to deploy, however, you must use the `ng build` command to build the application and deploy the build artifacts elsewhere. Both `ng build` and `ng serve` clear the output folder before they build the project, but only the `ng build` command writes the generated build artifacts to the output folder. > The output folder is `dist/project-name/` by default. To output to a different folder, change the `outputPath` in `angular.json`. > > As you near the end of the development process, serving the contents of your output folder from a local web server can give you a better idea of how your application will behave when it is deployed to a remote server. You will need two terminals to get the live-reload experience. * On the first terminal, run the [`ng build` command](cli/build) in *watch* mode to compile the application to the `dist` folder. ``` ng build --watch ``` Like the `ng serve` command, this regenerates output files when source files change. * On the second terminal, install a web server (such as [lite-server](https://github.com/johnpapa/lite-server)), and run it against the output folder. For example: ``` lite-server --baseDir="dist/project-name" ``` The server will automatically reload your browser when new files are output. > This method is for development and testing only, and is not a supported or secure way of deploying an application. > > ### Automatic deployment with the CLI The Angular CLI command `ng deploy` (introduced in version 8.3.0) executes the `deploy` [CLI builder](cli-builder) associated with your project. A number of third-party builders implement deployment capabilities to different platforms. You can add any of them to your project by running `ng add [package name]`. When you add a package with deployment capability, it'll automatically update your workspace configuration (`angular.json` file) with a `deploy` section for the selected project. You can then use the `ng deploy` command to deploy that project. For example, the following command automatically deploys a project to Firebase. ``` ng add @angular/fire ng deploy ``` The command is interactive. In this case, you must have or create a Firebase account, and authenticate using that account. The command prompts you to select a Firebase project for deployment The command builds your application and uploads the production assets to Firebase. In the table below, you can find a list of packages which implement deployment functionality to different platforms. The `deploy` command for each package may require different command line options. 
You can read more by following the links associated with the package names below: | Deployment to | Package | | --- | --- | | [Firebase hosting](https://firebase.google.com/docs/hosting) | [`@angular/fire`](https://npmjs.org/package/@angular/fire) | | [Vercel](https://vercel.com/solutions/angular) | [`vercel init angular`](https://github.com/vercel/vercel/tree/main/examples/angular) | | [Netlify](https://www.netlify.com) | [`@netlify-builder/deploy`](https://npmjs.org/package/@netlify-builder/deploy) | | [GitHub pages](https://pages.github.com) | [`angular-cli-ghpages`](https://npmjs.org/package/angular-cli-ghpages) | | [NPM](https://npmjs.com) | [`ngx-deploy-npm`](https://npmjs.org/package/ngx-deploy-npm) | | [Amazon Cloud S3](https://aws.amazon.com/s3/?nc2=h_ql_prod_st_s3) | [`@jefiozie/ngx-aws-deploy`](https://www.npmjs.com/package/@jefiozie/ngx-aws-deploy) | If you're deploying to a self-managed server or there's no builder for your favorite cloud platform, you can either create a builder that allows you to use the `ng deploy` command, or read through this guide to learn how to manually deploy your application. ### Basic deployment to a remote server For the simplest deployment, create a production build and copy the output directory to a web server. 1. Start with the production build: ``` ng build ``` 2. Copy *everything* within the output folder (`dist/project-name/` by default) to a folder on the server. 3. Configure the server to redirect requests for missing files to `index.html`. Learn more about server-side redirects [below](deployment#fallback). This is the simplest production-ready deployment of your application. ### Deploy to GitHub Pages To deploy your Angular application to [GitHub Pages](https://help.github.com/articles/what-is-github-pages), complete the following steps: 1. [Create a GitHub repository](https://help.github.com/articles/create-a-repo) for your project. 2. Configure `git` in your local project by adding a remote that specifies the GitHub repository you created in previous step. GitHub provides these commands when you create the repository so that you can copy and paste them at your command prompt. The commands should be similar to the following, though GitHub fills in your project-specific settings for you: ``` git remote add origin https://github.com/your-username/your-project-name.git git branch -M main git push -u origin main ``` When you paste these commands from GitHub, they run automatically. 3. Create and check out a `git` branch named `gh-pages`. ``` git checkout -b gh-pages ``` 4. Build your project using the GitHub project name, with the Angular CLI command [`ng build`](cli/build) and the following options, where `your_project_name` is the name of the project that you gave the GitHub repository in step 1. Be sure to include the slashes on either side of your project name as in `/your_project_name/`. ``` ng build --output-path docs --base-href /your_project_name/ ``` 5. When the build is complete, make a copy of `docs/index.html` and name it `docs/404.html`. 6. Commit your changes and push. 7. On the GitHub project page, go to Settings and select the Pages option from the left sidebar to configure the site to [publish from the docs folder](https://docs.github.com/en/pages/getting-started-with-github-pages/configuring-a-publishing-source-for-your-github-pages-site#choosing-a-publishing-source). 8. Click Save. 9. Click on the GitHub Pages link at the top of the GitHub Pages section to see your deployed application. 
The format of the link is `https://<user_name>.github.io/<project_name>`. > Check out [angular-cli-ghpages](https://github.com/angular-buch/angular-cli-ghpages), a full-featured package that does all this for you and has extra functionality. > > Server configuration -------------------- This section covers changes you may have to make to the server or to files deployed on the server. ### Routed apps must fall back to `index.html` Angular applications are perfect candidates for serving with a simple static HTML server. You don't need a server-side engine to dynamically compose application pages because Angular does that on the client-side. If the application uses the Angular router, you must configure the server to return the application's host page (`index.html`) when asked for a file that it does not have. A routed application should support "deep links". A *deep link* is a URL that specifies a path to a component inside the application. For example, `http://www.mysite.com/heroes/42` is a *deep link* to the hero detail page that displays the hero with `id: 42`. There is no issue when the user navigates to that URL from within a running client. The Angular router interprets the URL and routes to that page and hero. But clicking a link in an email, entering it in the browser address bar, or merely refreshing the browser while on the hero detail page —all of these actions are handled by the browser itself, *outside* the running application. The browser makes a direct request to the server for that URL, bypassing the router. A static server routinely returns `index.html` when it receives a request for `http://www.mysite.com/`. But it rejects `http://www.mysite.com/heroes/42` and returns a `404 - Not Found` error *unless* it is configured to return `index.html` instead. #### Fallback configuration examples There is no single configuration that works for every server. The following sections describe configurations for some of the most popular servers. The list is by no means exhaustive, but should provide you with a good starting point. | Servers | Details | | --- | --- | | [Apache](https://httpd.apache.org) | Add a [rewrite rule](https://httpd.apache.org/docs/current/mod/mod_rewrite.html) to the `.htaccess` file as shown ([ngmilk.rocks/2015/03/09/angularjs-html5-mode-or-pretty-urls-on-apache-using-htaccess](https://ngmilk.rocks/2015/03/09/angularjs-html5-mode-or-pretty-urls-on-apache-using-htaccess)): ``` RewriteEngine On   # If an existing asset or directory is requested go to it as it is   RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]   RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d   RewriteRule ^ - [L]   # If the requested resource doesn't exist, use index.html   RewriteRule ^ /index.html ``` | | [Nginx](https://nginx.org) | Use `try_files`, as described in [Front Controller Pattern Web Apps](https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/#front-controller-pattern-web-apps), modified to serve `index.html`: ``` try_files $uri $uri/ /index.html; ``` | | [Ruby](https://www.ruby-lang.org) | Create a Ruby server using ([sinatra](http://sinatrarb.com)) with a basic Ruby file that configures the server `server.rb`: ``` require 'sinatra' # Folder structure # . 
# -- server.rb # -- public #    |-- project-name #        |-- index.html get '/' do   folderDir = settings.public_folder + '/project-name' # ng build output folder   send_file File.join(folderDir, 'index.html') end ``` | | [IIS](https://www.iis.net) | Add a rewrite rule to `web.config`, similar to the one shown [here](https://stackoverflow.com/a/26152011): ``` <system.webServer>   <rewrite>     <rules>       <rule name="Angular Routes" stopProcessing="true">         <match url=".*" />         <conditions logicalGrouping="MatchAll">           <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />           <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />         </conditions>         <action type="Rewrite" url="/index.html" />       </rule>     </rules>   </rewrite> </system.webServer> ``` | | [GitHub Pages](https://pages.github.com) | You can't [directly configure](https://github.com/isaacs/github/issues/408) the GitHub Pages server, but you can add a 404 page. Copy `index.html` into `404.html`. It will still be served as the 404 response, but the browser will process that page and load the application properly. It's also a good idea to [serve from `docs` on main](https://docs.github.com/en/pages/getting-started-with-github-pages/configuring-a-publishing-source-for-your-github-pages-site#choosing-a-publishing-source) and to [create a `.nojekyll` file](https://www.bennadel.com/blog/3181-including-node-modules-and-vendors-folders-in-your-github-pages-site.htm) | | [Firebase hosting](https://firebase.google.com/docs/hosting) | Add a [rewrite rule](https://firebase.google.com/docs/hosting/url-redirects-rewrites#section-rewrites). ``` "rewrites": [ {   "source": "**",   "destination": "/index.html" } ] ``` | ### Configuring correct MIME-type for JavaScript assets All of your application JavaScript files must be served by the server with the [`Content-Type` header](https://developer.mozilla.org/docs/Web/HTTP/Headers/Content-Type) set to `text/javascript` or another [JavaScript-compatible MIME-type](https://developer.mozilla.org/docs/Web/HTTP/Basics_of_HTTP/MIME_types#textjavascript). Most servers and hosting services already do this by default. Server with misconfigured mime-type for JavaScript files will cause an application to fail to start with the following error: ``` Failed to load module script: The server responded with a non-JavaScript MIME type of "text/plain". Strict MIME type checking is enforced for module scripts per HTML spec. ``` If this is the case, you will need to check your server configuration and reconfigure it to serve `.js` files with `Content-Type: text/javascript`. See your server's manual for instructions on how to do this. ### Requesting services from a different server (CORS) Angular developers may encounter a [*cross-origin resource sharing*](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing "Cross-origin resource sharing") error when making a service request (typically a data service request) to a server other than the application's own host server. Browsers forbid such requests unless the server permits them explicitly. There isn't anything the client application can do about these errors. The server must be configured to accept the application's requests. Read about how to enable CORS for specific servers at [enable-cors.org](https://enable-cors.org/server.html "Enabling CORS server"). Production optimizations ------------------------ The `production` configuration engages the following build optimization features. 
| Features | Details | | --- | --- | | [Ahead-of-Time (AOT) Compilation](aot-compiler) | Pre-compiles Angular component templates. | | [Production mode](deployment#enable-prod-mode) | Deploys the production environment which enables *production mode*. | | Bundling | Concatenates your many application and library files into a few bundles. | | Minification | Removes excess whitespace, comments, and optional tokens. | | Uglification | Rewrites code to use short, cryptic variable and function names. | | Dead code elimination | Removes unreferenced modules and much unused code. | See [`ng build`](cli/build) for more about CLI build options and what they do. ### Enable runtime production mode In addition to build optimizations, Angular also has a runtime production mode. Angular applications run in development mode by default, as you can see by the following message on the browser console: ``` Angular is running in development mode. Call `enableProdMode()` to enable production mode. ``` *Production mode* improves application performance by disabling development-only safety checks and debugging utilities, such as the expression-changed-after-checked detection. Building your application with the production configuration automatically enables Angular's runtime production mode. ### Lazy loading You can dramatically reduce launch time by only loading the application modules that absolutely must be present when the application starts. Configure the Angular Router to defer loading of all other modules (and their associated code), either by [waiting until the app has launched](router-tutorial-toh#preloading "Preloading") or by [*lazy loading*](router#lazy-loading "Lazy loading") them on demand. If you mean to lazy-load a module, be careful not to import it in a file that's eagerly loaded when the application starts (such as the root `AppModule`). If you do that, the module will be loaded immediately. The bundling configuration must take lazy loading into consideration. Because lazy-loaded modules aren't imported in JavaScript, bundlers exclude them by default. Bundlers don't know about the router configuration and can't create separate bundles for lazy-loaded modules. You would have to create these bundles manually. The CLI runs the [Angular Ahead-of-Time Webpack Plugin](https://github.com/angular/angular-cli/tree/main/packages/ngtools/webpack) which automatically recognizes lazy-loaded `NgModules` and creates separate bundles for them. ### Measure performance You can make better decisions about what to optimize and how when you have a clear and accurate understanding of what's making the application slow. The cause may not be what you think it is. You can waste a lot of time and money optimizing something that has no tangible benefit or even makes the application slower. You should measure the application's actual behavior when running in the environments that are important to you. The [Chrome DevTools Network Performance page](https://developer.chrome.com/docs/devtools/network/reference "Chrome DevTools Network Performance") is a good place to start learning about measuring performance. The [WebPageTest](https://www.webpagetest.org) tool is another good choice that can also help verify that your deployment was successful. ### Inspect the bundles The [source-map-explorer](https://github.com/danvk/source-map-explorer/blob/master/README.md) tool is a great way to inspect the generated JavaScript bundles after a production build. 
Install `source-map-explorer`: ``` npm install source-map-explorer --save-dev ``` Build your application for production *including the source maps* ``` ng build --source-map ``` List the generated bundles in the `dist/project-name/` folder. ``` ls dist/project-name/*.js ``` Run the explorer to generate a graphical representation of one of the bundles. The following example displays the graph for the *main* bundle. ``` node_modules/.bin/source-map-explorer dist/project-name/main* ``` The `source-map-explorer` analyzes the source map generated with the bundle and draws a map of all dependencies, showing exactly which classes are included in the bundle. Here's the output for the *main* bundle of an example application called `cli-quickstart`. The `base` tag -------------- The HTML [`<base href="..." />`](router) specifies a base path for resolving relative URLs to assets such as images, scripts, and style sheets. For example, given the `<base href="/my/app/">`, the browser resolves a URL such as `some/place/foo.jpg` into a server request for `my/app/some/place/foo.jpg`. During navigation, the Angular router uses the *base href* as the base path to component, template, and module files. > See also the [`APP_BASE_HREF`](../api/common/app_base_href "API: APP_BASE_HREF") alternative. > > In development, you typically start the server in the folder that holds `index.html`. That's the root folder and you'd add `<base href="/">` near the top of `index.html` because `/` is the root of the application. But on the shared or production server, you might serve the application from a subfolder. For example, when the URL to load the application is something like `http://www.mysite.com/my/app`, the subfolder is `my/app/` and you should add `<base href="/my/app/">` to the server version of the `index.html`. When the `base` tag is mis-configured, the application fails to load and the browser console displays `404 - Not Found` errors for the missing files. Look at where it *tried* to find those files and adjust the base tag appropriately. The `deploy` url ---------------- A command line option used to specify the base path for resolving relative URLs for assets such as images, scripts, and style sheets at *compile* time. For example: `ng build --deploy-url /my/assets`. The effects of defining a `deploy url` and `base href` can overlap. * Both can be used for initial scripts, stylesheets, lazy scripts, and css resources. However, defining a `base href` has a few unique effects. * Defining a `base href` can be used for locating relative template (HTML) assets, and relative fetch/XMLHttpRequests. The `base href` can also be used to define the Angular router's default base (see [`APP_BASE_HREF`](../api/common/app_base_href)). Users with more complicated setups may need to manually configure the `[APP\_BASE\_HREF](../api/common/app_base_href)` token within the application (for example, application routing base is `/` but `assets/scripts/etc.` are at `/assets/`). Unlike the `base href` which can be defined in a single place, the `deploy url` needs to be hard-coded into an application at build time. This means specifying a `deploy url` will decrease build speed, but this is the unfortunate cost of using an option that embeds itself throughout an application. That is why a `base href` is generally the better option. Last reviewed on Mon Feb 28 2022
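For the more complicated setups mentioned above, here is a minimal sketch of configuring the token manually in the root module (the `/app/` value is illustrative):

```
import { APP_BASE_HREF } from '@angular/common';
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';

@NgModule({
  imports: [BrowserModule],
  declarations: [AppComponent],
  providers: [
    // Overrides the value the router would otherwise read from the <base> tag
    { provide: APP_BASE_HREF, useValue: '/app/' }
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }
```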
angular Understanding Pipes Understanding Pipes =================== Use [pipes](glossary#pipe "Definition of a pipe") to transform strings, currency amounts, dates, and other data for display. What is a pipe -------------- Pipes are simple functions to use in [template expressions](glossary#template-expression "Definition of template expression") to accept an input value and return a transformed value. Pipes are useful because you can use them throughout your application, while only declaring each pipe once. For example, you would use a pipe to show a date as **April 15, 1988** rather than the raw string format. > For the sample application used in this topic, see the . > > Built-in pipes -------------- Angular provides built-in pipes for typical data transformations, including transformations for internationalization (i18n), which use locale information to format data. The following are commonly used built-in pipes for data formatting: * [`DatePipe`](../api/common/datepipe): Formats a date value according to locale rules. * [`UpperCasePipe`](../api/common/uppercasepipe): Transforms text to all upper case. * [`LowerCasePipe`](../api/common/lowercasepipe): Transforms text to all lower case. * [`CurrencyPipe`](../api/common/currencypipe): Transforms a number to a currency string, formatted according to locale rules. * [`DecimalPipe`](../api/common/decimalpipe): Transforms a number into a string with a decimal point, formatted according to locale rules. * [`PercentPipe`](../api/common/percentpipe): Transforms a number to a percentage string, formatted according to locale rules. > * For a complete list of built-in pipes, see the [pipes API documentation](../api/common#pipes "Pipes API reference summary"). > * To learn more about using pipes for internationalization (i18n) efforts, see [formatting data based on locale](i18n-common-format-data-locale "Format data based on locale | Angular"). > > Create pipes to encapsulate custom transformations and use your custom pipes in template expressions. Pipes and precedence -------------------- The pipe operator has a higher precedence than the ternary operator (`?:`), which means `a ? b : c | x` is parsed as `a ? b : (c | x)`. The pipe operator cannot be used without parentheses in the first and second operands of `?:`. Due to precedence, if you want a pipe to apply to the result of a ternary, wrap the entire expression in parentheses; for example, `(a ? b : c) | x`. ``` <!-- use parentheses in the third operand so the pipe applies to the whole expression --> {{ (true ? 'true' : 'false') | uppercase }} ``` Last reviewed on Fri Apr 01 2022 angular Introduction to Angular animations Introduction to Angular animations ================================== Animation provides the illusion of motion: HTML elements change styling over time. Well-designed animations can make your application more fun and straightforward to use, but they aren't just cosmetic. Animations can improve your application and user experience in a number of ways: * Without animations, web page transitions can seem abrupt and jarring * Motion greatly enhances the user experience, so animations give users a chance to detect the application's response to their actions * Good animations intuitively call the user's attention to where it is needed Typically, animations involve multiple style *transformations* over time. An HTML element can move, change color, grow or shrink, fade, or slide off the page. These changes can occur simultaneously or sequentially. 
You can control the timing of each transformation. Angular's animation system is built on CSS functionality, which means you can animate any property that the browser considers animatable. This includes positions, sizes, transforms, colors, borders, and more. The W3C maintains a list of animatable properties on its [CSS Transitions](https://www.w3.org/TR/css-transitions-1) page. About this guide ---------------- This guide covers the basic Angular animation features to get you started on adding Angular animations to your project. The features described in this guide —and the more advanced features described in the related Angular animations guides— are demonstrated in an example application available as a live example. Prerequisites ------------- The guide assumes that you're familiar with building basic Angular apps, as described in the following sections: * [Tutorial](tutorial) * [Architecture Overview](architecture) Getting started --------------- The main Angular modules for animations are `@angular/animations` and `@angular/platform-browser`. When you create a new project using the CLI, these dependencies are automatically added to your project. To get started with adding Angular animations to your project, import the animation-specific modules along with standard Angular functionality. ### Step 1: Enabling the animations module Import `[BrowserAnimationsModule](../api/platform-browser/animations/browseranimationsmodule)`, which introduces the animation capabilities into your Angular root application module. ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule ], declarations: [ ], bootstrap: [ ] }) export class AppModule { } ``` > **NOTE**: When you use the CLI to create your app, the root application module `app.module.ts` is placed in the `src/app` folder. > > ### Step 2: Importing animation functions into component files If you plan to use specific animation functions in component files, import those functions from `@angular/animations`. ``` import { Component, HostBinding } from '@angular/core'; import { trigger, state, style, animate, transition, // ... } from '@angular/animations'; ``` > **NOTE**: See a [summary of available animation functions](animations#animation-api-summary) at the end of this guide. > > ### Step 3: Adding the animation metadata property In the component file, add a metadata property called `animations:` within the `@[Component](../api/core/component)()` decorator. You put the trigger that defines an animation within the `animations` metadata property. ``` @Component({ selector: 'app-root', templateUrl: 'app.component.html', styleUrls: ['app.component.css'], animations: [ // animation triggers go here ] }) ``` Animating a transition ---------------------- Let's animate a transition that changes a single HTML element from one state to another. For example, you can specify that a button displays either **Open** or **Closed** based on the user's last action. When the button is in the `open` state, it's visible and yellow. When it's the `closed` state, it's translucent and blue. In HTML, these attributes are set using ordinary CSS styles such as color and opacity. In Angular, use the `[style](../api/animations/style)()` function to specify a set of CSS styles for use with animations. 
Collect a set of styles in an animation state, and give the state a name, such as `open` or `closed`. > Let's create a new `open-close` component to animate with simple transitions. > > Run the following command in terminal to generate the component: > > > ``` > ng g component open-close > ``` > This will create the component at `src/app/open-close.component.ts`. > > ### Animation state and styles Use Angular's [`state()`](../api/animations/state) function to define different states to call at the end of each transition. This function takes two arguments: A unique name like `open` or `closed` and a `[style](../api/animations/style)()` function. Use the `[style](../api/animations/style)()` function to define a set of styles to associate with a given state name. You must use [*camelCase*](glossary#case-conventions) for style attributes that contain dashes, such as `backgroundColor` or wrap them in quotes, such as `'background-color'`. Let's see how Angular's [`state()`](../api/animations/state) function works with the `[style](../api/animations/style)⁣­(⁠)` function to set CSS style attributes. In this code snippet, multiple style attributes are set at the same time for the state. In the `open` state, the button has a height of 200 pixels, an opacity of 1, and a yellow background color. ``` // ... state('open', style({ height: '200px', opacity: 1, backgroundColor: 'yellow' })), ``` In the following `closed` state, the button has a height of 100 pixels, an opacity of 0.8, and a background color of blue. ``` state('closed', style({ height: '100px', opacity: 0.8, backgroundColor: 'blue' })), ``` ### Transitions and timing In Angular, you can set multiple styles without any animation. However, without further refinement, the button instantly transforms with no fade, no shrinkage, or other visible indicator that a change is occurring. To make the change less abrupt, you need to define an animation *transition* to specify the changes that occur between one state and another over a period of time. The `[transition](../api/animations/transition)()` function accepts two arguments: The first argument accepts an expression that defines the direction between two transition states, and the second argument accepts one or a series of `[animate](../api/animations/animate)()` steps. Use the `[animate](../api/animations/animate)()` function to define the length, delay, and easing of a transition, and to designate the style function for defining styles while transitions are taking place. Use the `[animate](../api/animations/animate)()` function to define the `[keyframes](../api/animations/keyframes)()` function for multi-step animations. These definitions are placed in the second argument of the `[animate](../api/animations/animate)()` function. #### Animation metadata: duration, delay, and easing The `[animate](../api/animations/animate)()` function (second argument of the transition function) accepts the `timings` and `styles` input parameters. The `timings` parameter takes either a number or a string defined in three parts. ``` animate (duration) ``` or ``` animate ('duration delay easing') ``` The first part, `duration`, is required. The duration can be expressed in milliseconds as a number without quotes, or in seconds with quotes and a time specifier. 
For example, a duration of a tenth of a second can be expressed as follows: * As a plain number, in milliseconds: `100` * In a string, as milliseconds: `'100ms'` * In a string, as seconds: `'0.1s'` The second argument, `delay`, has the same syntax as `duration`. For example: * Wait for 100ms and then run for 200ms: `'0.2s 100ms'` The third argument, `easing`, controls how the animation [accelerates and decelerates](https://easings.net) during its runtime. For example, `ease-in` causes the animation to begin slowly, and to pick up speed as it progresses. * Wait for 100ms, run for 200ms. Use a deceleration curve to start out fast and slowly decelerate to a resting point: `'0.2s 100ms ease-out'` * Run for 200ms, with no delay. Use a standard curve to start slow, accelerate in the middle, and then decelerate slowly at the end: `'0.2s ease-in-out'` * Start immediately, run for 200ms. Use an acceleration curve to start slow and end at full velocity: `'0.2s ease-in'` > **NOTE**: See the Material Design website's topic on [Natural easing curves](https://material.io/design/motion/speed.html#easing) for general information on easing curves. > > This example provides a state transition from `open` to `closed` with a 1-second transition between states. ``` transition('open => closed', [ animate('1s') ]), ``` In the preceding code snippet, the `=>` operator indicates unidirectional transitions, and `<=>` is bidirectional. Within the transition, `[animate](../api/animations/animate)()` specifies how long the transition takes. In this case, the state change from `open` to `closed` takes 1 second, expressed here as `1s`. This example adds a state transition from the `closed` state to the `open` state with a 0.5-second transition animation arc. ``` transition('closed => open', [ animate('0.5s') ]), ``` > **NOTE**: Some additional notes on using styles within [`state`](../api/animations/state) and `[transition](../api/animations/transition)` functions. > > * Use [`state()`](../api/animations/state) to define styles that are applied at the end of each transition, they persist after the animation completes > * Use `[transition](../api/animations/transition)()` to define intermediate styles, which create the illusion of motion during the animation > * When animations are disabled, `[transition](../api/animations/transition)()` styles can be skipped, but [`state()`](../api/animations/state) styles can't > * Include multiple state pairs within the same `[transition](../api/animations/transition)()` argument: > > > ``` > transition( 'on => off, off => void' ) > ``` > > ### Triggering the animation An animation requires a *trigger*, so that it knows when to start. The `[trigger](../api/animations/trigger)()` function collects the states and transitions, and gives the animation a name, so that you can attach it to the triggering element in the HTML template. The `[trigger](../api/animations/trigger)()` function describes the property name to watch for changes. When a change occurs, the trigger initiates the actions included in its definition. These actions can be transitions or other functions, as we'll see later on. In this example, we'll name the trigger `openClose`, and attach it to the `button` element. The trigger describes the open and closed states, and the timings for the two transitions. > **NOTE**: Within each `[trigger](../api/animations/trigger)()` function call, an element can only be in one state at any given time. However, it's possible for multiple triggers to be active at once. 
> > ### Defining animations and attaching them to the HTML template Animations are defined in the metadata of the component that controls the HTML element to be animated. Put the code that defines your animations under the `animations:` property within the `@[Component](../api/core/component)()` decorator. ``` @Component({ selector: 'app-open-close', animations: [ trigger('openClose', [ // ... state('open', style({ height: '200px', opacity: 1, backgroundColor: 'yellow' })), state('closed', style({ height: '100px', opacity: 0.8, backgroundColor: 'blue' })), transition('open => closed', [ animate('1s') ]), transition('closed => open', [ animate('0.5s') ]), ]), ], templateUrl: 'open-close.component.html', styleUrls: ['open-close.component.css'] }) export class OpenCloseComponent { isOpen = true; toggle() { this.isOpen = !this.isOpen; } } ``` When you've defined an animation trigger for a component, attach it to an element in that component's template by wrapping the trigger name in brackets and preceding it with an `@` symbol. Then, you can bind the trigger to a template expression using standard Angular property binding syntax as shown below, where `triggerName` is the name of the trigger, and `expression` evaluates to a defined animation state. ``` <div [@triggerName]="expression">…</div>; ``` The animation is executed or triggered when the expression value changes to a new state. The following code snippet binds the trigger to the value of the `isOpen` property. ``` <nav> <button type="button" (click)="toggle()">Toggle Open/Close</button> </nav> <div [@openClose]="isOpen ? 'open' : 'closed'" class="open-close-container"> <p>The box is now {{ isOpen ? 'Open' : 'Closed' }}!</p> </div> ``` In this example, when the `isOpen` expression evaluates to a defined state of `open` or `closed`, it notifies the trigger `openClose` of a state change. Then it's up to the `openClose` code to handle the state change and kick off a state change animation. For elements entering or leaving a page (inserted or removed from the DOM), you can make the animations conditional. For example, use `*[ngIf](../api/common/ngif)` with the animation trigger in the HTML template. > **NOTE**: In the component file, set the trigger that defines the animations as the value of the `animations:` property in the `@[Component](../api/core/component)()` decorator. > > In the HTML template file, use the trigger name to attach the defined animations to the HTML element to be animated. > > ### Code review Here are the code files discussed in the transition example. ``` @Component({ selector: 'app-open-close', animations: [ trigger('openClose', [ // ... state('open', style({ height: '200px', opacity: 1, backgroundColor: 'yellow' })), state('closed', style({ height: '100px', opacity: 0.8, backgroundColor: 'blue' })), transition('open => closed', [ animate('1s') ]), transition('closed => open', [ animate('0.5s') ]), ]), ], templateUrl: 'open-close.component.html', styleUrls: ['open-close.component.css'] }) export class OpenCloseComponent { isOpen = true; toggle() { this.isOpen = !this.isOpen; } } ``` ``` <nav> <button type="button" (click)="toggle()">Toggle Open/Close</button> </nav> <div [@openClose]="isOpen ? 'open' : 'closed'" class="open-close-container"> <p>The box is now {{ isOpen ? 
'Open' : 'Closed' }}!</p> </div> ``` ``` :host { display: block; margin-top: 1rem; } .open-close-container { border: 1px solid #dddddd; margin-top: 1em; padding: 20px 20px 0px 20px; color: #000000; font-weight: bold; font-size: 20px; } ``` ### Summary You learned to add animation to a transition between two states, using `[style](../api/animations/style)()` and [`state()`](../api/animations/state) along with `[animate](../api/animations/animate)()` for the timing. Learn about more advanced features in Angular animations under the Animation section, beginning with advanced techniques in [transition and triggers](transition-and-triggers). Animations API summary ---------------------- The functional API provided by the `@angular/animations` module provides a domain-specific language (DSL) for creating and controlling animations in Angular applications. See the [API reference](../api/animations) for a complete listing and syntax details of the core functions and related data structures. | Function name | What it does | | --- | --- | | `[trigger](../api/animations/trigger)()` | Kicks off the animation and serves as a container for all other animation function calls. HTML template binds to `triggerName`. Use the first argument to declare a unique trigger name. Uses array syntax. | | `[style](../api/animations/style)()` | Defines one or more CSS styles to use in animations. Controls the visual appearance of HTML elements during animations. Uses object syntax. | | [`state()`](../api/animations/state) | Creates a named set of CSS styles that should be applied on successful transition to a given state. The state can then be referenced by name within other animation functions. | | `[animate](../api/animations/animate)()` | Specifies the timing information for a transition. Optional values for `delay` and `easing`. Can contain `[style](../api/animations/style)()` calls within. | | `[transition](../api/animations/transition)()` | Defines the animation sequence between two named states. Uses array syntax. | | `[keyframes](../api/animations/keyframes)()` | Allows a sequential change between styles within a specified time interval. Use within `[animate](../api/animations/animate)()`. Can include multiple `[style](../api/animations/style)()` calls within each `keyframe()`. Uses array syntax. | | [`group()`](../api/animations/group) | Specifies a group of animation steps (*inner animations*) to be run in parallel. Animation continues only after all inner animation steps have completed. Used within `[sequence](../api/animations/sequence)()` or `[transition](../api/animations/transition)()`. | | `[query](../api/animations/query)()` | Finds one or more inner HTML elements within the current element. | | `[sequence](../api/animations/sequence)()` | Specifies a list of animation steps that are run sequentially, one by one. | | `[stagger](../api/animations/stagger)()` | Staggers the starting time for animations for multiple elements. | | `[animation](../api/animations/animation)()` | Produces a reusable animation that can be invoked from elsewhere. Used together with `[useAnimation](../api/animations/useanimation)()`. | | `[useAnimation](../api/animations/useanimation)()` | Activates a reusable animation. Used with `[animation](../api/animations/animation)()`. | | `[animateChild](../api/animations/animatechild)()` | Allows animations on child components to be run within the same timeframe as the parent. 
| More on Angular animations -------------------------- You might also be interested in the following: * [Transition and triggers](transition-and-triggers) * [Complex animation sequences](complex-animation-sequences) * [Reusable animations](reusable-animations) * [Route transition animations](route-animations) > Check out this [presentation](https://www.youtube.com/watch?v=rnTK9meY5us), shown at the AngularConnect conference in November 2017, and the accompanying [source code](https://github.com/matsko/animationsftw.in). > > Last reviewed on Mon Feb 28 2022
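To show how several of the functions summarized above compose, here is a small illustrative sketch (not part of the open/close example): a hypothetical `StaggerListComponent` that combines `trigger()`, `transition()`, `query()`, `stagger()`, `animate()`, and `keyframes()` to fade newly added list items in one after another. The selector, timings, and style values are arbitrary, and the sketch assumes `BrowserAnimationsModule` (and `CommonModule` for `*ngFor`) are available to the application.

```
import { Component } from '@angular/core';
import {
  animate,
  keyframes,
  query,
  stagger,
  style,
  transition,
  trigger
} from '@angular/animations';

@Component({
  selector: 'app-stagger-list',
  template: `
    <button type="button" (click)="addItem()">Add item</button>
    <ul [@listAnimation]="items.length">
      <li *ngFor="let item of items">{{ item }}</li>
    </ul>
  `,
  animations: [
    trigger('listAnimation', [
      // Runs whenever the bound value (the list length) changes.
      transition('* => *', [
        // Select newly inserted <li> elements; `optional` avoids errors when none enter.
        query(':enter', [
          style({ opacity: 0, transform: 'translateY(-8px)' }),
          // Start each entering item 100ms after the previous one.
          stagger(100, [
            animate('300ms ease-out', keyframes([
              style({ opacity: 0, transform: 'translateY(-8px)', offset: 0 }),
              style({ opacity: 0.6, transform: 'translateY(2px)', offset: 0.7 }),
              style({ opacity: 1, transform: 'translateY(0)', offset: 1 })
            ]))
          ])
        ], { optional: true })
      ])
    ])
  ]
})
export class StaggerListComponent {
  items = ['Item 1', 'Item 2'];

  addItem() {
    this.items.push(`Item ${this.items.length + 1}`);
  }
}
```

Binding the trigger to `items.length` means the transition runs each time the list grows, and `{ optional: true }` keeps `query()` from throwing when no new elements enter.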
angular Set the runtime locale manually Set the runtime locale manually =============================== The initial installation of Angular already contains locale data for English in the United States (`en-US`). The [Angular CLI](cli "CLI Overview and Command Reference | Angular") automatically includes the locale data and sets the `[LOCALE\_ID](../api/core/locale_id)` value when you use the `--localize` option with [`ng build`](cli/build "ng build | CLI | Angular") command. To manually set the runtime locale of an application to one other than the automatic value, complete the following actions. 1. Search for the Unicode locale ID in the language-locale combination in the [`@angular/common/locales/`](https://unpkg.com/browse/@angular/common/locales/ "@angular/common/locales/ | Unpkg") directory. 2. Set the [`LOCALE_ID`](../api/core/locale_id "LOCALE_ID | Core - API | Angular") token. The following example sets the value of `[LOCALE\_ID](../api/core/locale_id)` to `fr` for French. ``` import { LOCALE_ID, NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from '../src/app/app.component'; @NgModule({ imports: [ BrowserModule ], declarations: [ AppComponent ], providers: [ { provide: LOCALE_ID, useValue: 'fr' } ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` Last reviewed on Mon Feb 28 2022 angular Component styles Component styles ================ Angular applications are styled with standard CSS. That means you can apply everything you know about CSS stylesheets, selectors, rules, and media queries directly to Angular applications. Additionally, Angular can bundle *component styles* with components, enabling a more modular design than regular stylesheets. This page describes how to load and apply these component styles. Run the live example in Stackblitz and download the code from there. Using component styles ---------------------- For every Angular component you write, you can define not only an HTML template, but also the CSS styles that go with that template, specifying any selectors, rules, and media queries that you need. One way to do this is to set the `styles` property in the component metadata. The `styles` property takes an array of strings that contain CSS code. Usually you give it one string, as in the following example: ``` @Component({ selector: 'app-root', template: ` <h1>Tour of Heroes</h1> <app-hero-main [hero]="hero"></app-hero-main> `, styles: ['h1 { font-weight: normal; }'] }) export class HeroAppComponent { /* . . . */ } ``` Component styling best practices -------------------------------- > See [View Encapsulation](view-encapsulation) for information on how Angular scopes styles to specific components. > > You should consider the styles of a component to be private implementation details for that component. When consuming a common component, you should not override the component's styles any more than you should access the private members of a TypeScript class. While Angular's default style encapsulation prevents component styles from affecting other components, global styles affect all components on the page. This includes `::ng-deep`, which promotes a component style to a global style. ### Authoring a component to support customization As component author, you can explicitly design a component to accept customization in one of four different ways. #### 1. 
Use CSS Custom Properties (recommended) You can define a supported customization API for your component by defining its styles with CSS Custom Properties, alternatively known as CSS Variables. Anyone using your component can consume this API by defining values for these properties, customizing the final appearance of the component on the rendered page. While this requires defining a custom property for each customization point, it creates a clear API contract that works in all style encapsulation modes. #### 2. Declare global CSS with `@mixin` While Angular's emulated style encapsulation prevents styles from escaping a component, it does not prevent global CSS from affecting the entire page. While component consumers should avoid directly overwriting the CSS internals of a component, you can offer a supported customization API via a CSS preprocessor like Sass. For example, a component may offer one or more supported mixins to customize various aspects of the component's appearance. While this approach uses global styles in its implementation, it allows the component author to keep the mixins up to date with changes to the component's private DOM structure and CSS classes. #### 3. Customize with CSS `::part` If your component uses [Shadow DOM](https://developer.mozilla.org/docs/Web/Web_Components/Using_shadow_DOM), you can apply the `part` attribute to specify elements in your component's template. This allows consumers of the component to author arbitrary styles targeting those specific elements with [the `::part` pseudo-element](https://developer.mozilla.org/docs/Web/CSS/::part). While this lets you limit the elements within your template that consumers can customize, it does not limit which CSS properties are customizable. #### 4. Provide a TypeScript API You can define a TypeScript API for customizing styles, using template bindings to update CSS classes and styles. This is not recommended because the additional JavaScript cost of this style API incurs far more performance cost than CSS. Special selectors ----------------- Component styles have a few special *selectors* from the world of shadow DOM style scoping (described in the [CSS Scoping Module Level 1](https://www.w3.org/TR/css-scoping-1) page on the [W3C](https://www.w3.org) site). The following sections describe these selectors. ### :host Every component is associated within an element that matches the component's selector. This element, into which the template is rendered, is called the *host element*. The `:host` pseudo-class selector may be used to create styles that target the host element itself, as opposed to targeting elements inside the host. ``` @Component({ selector: 'app-main', template: ` <h1>It Works!</h1> <div> Start editing to see some magic happen :) </div> ` }) export class HostSelectorExampleComponent { } ``` Creating the following style will target the component's host element. Any rule applied to this selector will affect the host element and all its descendants (in this case, italicizing all contained text). ``` :host { font-style: italic; } ``` The `:host` selector only targets the host element of a component. Any styles within the `:host` block of a child component will *not* affect parent components. Use the *function form* to apply host styles conditionally by including another selector inside parentheses after `:host`. In this example the host's content also becomes bold when the `active` CSS class is applied to the host element. 
``` :host { font-style: italic; } :host(.active) { font-weight: bold; } ``` The `:host` selector can also be combined with other selectors. Add selectors behind the `:host` to select child elements, for example using `:host h2` to target all `<h2>` elements inside a component's view. > You should not add selectors (other than `:host-context`) in front of the `:host` selector to style a component based on the outer context of the component's view. Such selectors are not scoped to a component's view and will select the outer context, but it's not built-in behavior. Use `:host-context` selector for that purpose instead. > > ### :host-context Sometimes it's useful to apply styles to elements within a component's template based on some condition in an element that is an ancestor of the host element. For example, a CSS theme class could be applied to the document `<body>` element, and you want to change how your component looks based on that. Use the `:host-context()` pseudo-class selector, which works just like the function form of `:host()`. The `:host-context()` selector looks for a CSS class in any ancestor of the component host element, up to the document root. The `:host-context()` selector is only useful when combined with another selector. The following example italicizes all text inside a component, but only if some *ancestor* element of the host element has the CSS class `active`. ``` :host-context(.active) { font-style: italic; } ``` > **NOTE**: Only the host element and its descendants will be affected, not the ancestor with the assigned `active` class. > > ### (deprecated) `/deep/`, `>>>`, and `::ng-deep` Component styles normally apply only to the HTML in the component's own template. Applying the `::ng-deep` pseudo-class to any CSS rule completely disables view-encapsulation for that rule. Any style with `::ng-deep` applied becomes a global style. In order to scope the specified style to the current component and all its descendants, be sure to include the `:host` selector before `::ng-deep`. If the `::ng-deep` combinator is used without the `:host` pseudo-class selector, the style can bleed into other components. The following example targets all `<h3>` elements, from the host element down through this component to all of its child elements in the DOM. ``` :host ::ng-deep h3 { font-style: italic; } ``` The `/deep/` combinator also has the aliases `>>>`, and `::ng-deep`. > Use `/deep/`, `>>>`, and `::ng-deep` only with *emulated* view encapsulation. Emulated is the default and most commonly used view encapsulation. For more information, see the [View Encapsulation](view-encapsulation) section. > > > The shadow-piercing descendant combinator is deprecated and [support is being removed from major browsers](https://www.chromestatus.com/feature/6750456638341120) and tools. As such we plan to drop support in Angular (for all 3 of `/deep/`, `>>>`, and `::ng-deep`). Until then `::ng-deep` should be preferred for a broader compatibility with the tools. > > Loading component styles ------------------------ There are several ways to add styles to a component: * By setting `styles` or `styleUrls` metadata * Inline in the template HTML * With CSS imports The scoping rules outlined earlier apply to each of these loading patterns. ### Styles in component metadata Add a `styles` array property to the `@[Component](../api/core/component)` decorator. Each string in the array defines some CSS for this component. 
``` @Component({ selector: 'app-root', template: ` <h1>Tour of Heroes</h1> <app-hero-main [hero]="hero"></app-hero-main> `, styles: ['h1 { font-weight: normal; }'] }) export class HeroAppComponent { /* . . . */ } ``` > Reminder: These styles apply *only to this component*. They are *not inherited* by any components nested within the template nor by any content projected into the component. > > The Angular CLI command [`ng generate component`](cli/generate) defines an empty `styles` array when you create the component with the `--inline-style` flag. ``` ng generate component hero-app --inline-style ``` ### Style files in component metadata Load styles from external CSS files by adding a `styleUrls` property to a component's `@[Component](../api/core/component)` decorator: ``` @Component({ selector: 'app-root', template: ` <h1>Tour of Heroes</h1> <app-hero-main [hero]="hero"></app-hero-main> `, styleUrls: ['./hero-app.component.css'] }) export class HeroAppComponent { /* . . . */ } ``` ``` h1 { font-weight: normal; } ``` > Reminder: the styles in the style file apply *only to this component*. They are *not inherited* by any components nested within the template nor by any content projected into the component. > > > You can specify more than one styles file or even a combination of `styles` and `styleUrls`. > > When you use the Angular CLI command [`ng generate component`](cli/generate) without the `--inline-style` flag, it creates an empty styles file for you and references that file in the component's generated `styleUrls`. ``` ng generate component hero-app ``` ### Template inline styles Embed CSS styles directly into the HTML template by putting them inside `<[style](../api/animations/style)>` tags. ``` @Component({ selector: 'app-hero-controls', template: ` <style> button { background-color: white; border: 1px solid #777; } </style> <h3>Controls</h3> <button type="button" (click)="activate()">Activate</button> ` }) ``` ### Template link tags You can also write `<link>` tags into the component's HTML template. ``` @Component({ selector: 'app-hero-team', template: ` <!-- We must use a relative URL so that the AOT compiler can find the stylesheet --> <link rel="stylesheet" href="../assets/hero-team.component.css"> <h3>Team</h3> <ul> <li *ngFor="let member of hero.team"> {{member}} </li> </ul>` }) ``` > When building with the CLI, be sure to include the linked style file among the assets to be copied to the server as described in the [Assets configuration guide](workspace-config#assets-configuration). > > Once included, the CLI includes the stylesheet, whether the link tag's href URL is relative to the application root or the component file. > > ### CSS @imports Import CSS files into the CSS files using the standard CSS `@import` rule. For details, see [`@import`](https://developer.mozilla.org/en/docs/Web/CSS/@import) on the [MDN](https://developer.mozilla.org) site. In this case, the URL is relative to the CSS file into which you're importing. ``` /* The AOT compiler needs the `./` to show that this is local */ @import './hero-details-box.css'; ``` ### External and global style files When building with the CLI, you must configure the `angular.json` to include *all external assets*, including external style files. Register **global** style files in the `styles` section which, by default, is pre-configured with the global `styles.css` file. See the [Styles configuration guide](workspace-config#styles-and-scripts-configuration) to learn more. 
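To make the global registration above concrete, here is a minimal, hypothetical excerpt of an `angular.json` file for a project assumed to be named `my-app`; the second stylesheet entry is an illustrative assumption, not part of the default configuration, and unrelated configuration keys are omitted.

```
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "styles": [
              "src/styles.css",
              "src/theme.css"
            ]
          }
        }
      }
    }
  }
}
```

Every file listed in `styles` is bundled and applied globally, in the order given, to all components regardless of their view encapsulation.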
### Non-CSS style files If you're building with the CLI, you can write style files in [sass](https://sass-lang.com), or [less](https://lesscss.org), and specify those files in the `@[Component.styleUrls](../api/core/component#styleUrls)` metadata with the appropriate extensions (`.scss`, `.less`) as in the following example: ``` @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.scss'] }) … ``` The CLI build process runs the pertinent CSS preprocessor. When generating a component file with `ng generate component`, the CLI emits an empty CSS styles file (`.css`) by default. Configure the CLI to default to your preferred CSS preprocessor as explained in the [Workspace configuration guide](workspace-config#generation-schematics). Last reviewed on Mon Feb 28 2022 angular Schematics for libraries Schematics for libraries ======================== When you create an Angular library, you can provide and package it with schematics that integrate it with the Angular CLI. With your schematics, your users can use `ng add` to install an initial version of your library, `ng generate` to create artifacts defined in your library, and `ng update` to adjust their project for a new version of your library that introduces breaking changes. All three types of schematics can be part of a collection that you package with your library. Download the library schematics project for a completed example of the following steps. Creating a schematics collection -------------------------------- To start a collection, you need to create the schematic files. The following steps show you how to add initial support without modifying any project files. 1. In your library's root folder, create a `schematics` folder. 2. In the `schematics/` folder, create an `ng-add` folder for your first schematic. 3. At the root level of the `schematics` folder, create a `collection.json` file. 4. Edit the `collection.json` file to define the initial schema for your collection. ``` { "$schema": "../../../node_modules/@angular-devkit/schematics/collection-schema.json", "schematics": { "ng-add": { "description": "Add my library to the project.", "factory": "./ng-add/index#ngAdd" } } } ``` * The `$schema` path is relative to the Angular Devkit collection schema. * The `schematics` object describes the named schematics that are part of this collection. * The first entry is for a schematic named `ng-add`. It contains the description, and points to the factory function that is called when your schematic is executed. 5. In your library project's `package.json` file, add a "schematics" entry with the path to your schema file. The Angular CLI uses this entry to find named schematics in your collection when it runs commands. ``` { "name": "my-lib", "version": "0.0.1", "schematics": "./schematics/collection.json", } ``` The initial schema that you have created tells the CLI where to find the schematic that supports the `ng add` command. Now you are ready to create that schematic. Providing installation support ------------------------------ A schematic for the `ng add` command can enhance the initial installation process for your users. The following steps define this type of schematic. 1. Go to the `<lib-root>/schematics/ng-add` folder. 2. Create the main file, `index.ts`. 3. Open `index.ts` and add the source code for your schematic factory function. 
``` import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics'; import { NodePackageInstallTask } from '@angular-devkit/schematics/tasks'; // Just return the tree export function ngAdd(): Rule { return (tree: Tree, context: SchematicContext) => { context.addTask(new NodePackageInstallTask()); return tree; }; } ``` The only step needed to provide initial `ng add` support is to trigger an installation task using the `SchematicContext`. The task uses the user's preferred package manager to add the library to the project's `package.json` configuration file, and install it in the project's `node_modules` directory. In this example, the function receives the current `Tree` and returns it without any modifications. If you need to, do additional setup when your package is installed, such as generating files, updating configuration, or any other initial setup your library requires. ### Define dependency type Use the `save` option of `ng-add` to configure if the library should be added to the `dependencies`, the `devDependencies`, or not saved at all in the project's `package.json` configuration file. ``` "ng-add": { "save": "devDependencies" }, ``` Possible values are: | Values | Details | | --- | --- | | `false` | Don't add the package to `package.json` | | `true` | Add the package to the dependencies | | `"dependencies"` | Add the package to the dependencies | | `"devDependencies"` | Add the package to the devDependencies | Building your schematics ------------------------ To bundle your schematics together with your library, you must configure the library to build the schematics separately, then add them to the bundle. You must build your schematics *after* you build your library, so they are placed in the correct directory. * Your library needs a custom Typescript configuration file with instructions on how to compile your schematics into your distributed library * To add the schematics to the library bundle, add scripts to the library's `package.json` file Assume you have a library project `my-lib` in your Angular workspace. To tell the library how to build the schematics, add a `tsconfig.schematics.json` file next to the generated `tsconfig.lib.json` file that configures the library build. 1. Edit the `tsconfig.schematics.json` file to add the following content. ``` { "compilerOptions": { "baseUrl": ".", "lib": [ "es2018", "dom" ], "declaration": true, "module": "commonjs", "moduleResolution": "node", "noEmitOnError": true, "noFallthroughCasesInSwitch": true, "noImplicitAny": true, "noImplicitThis": true, "noUnusedParameters": true, "noUnusedLocals": true, "rootDir": "schematics", "outDir": "../../dist/my-lib/schematics", "skipDefaultLibCheck": true, "skipLibCheck": true, "sourceMap": true, "strictNullChecks": true, "target": "es6", "types": [ "jasmine", "node" ] }, "include": [ "schematics/**/*" ], "exclude": [ "schematics/*/files/**/*" ] } ``` | Options | Details | | --- | --- | | `rootDir` | Specifies that your `schematics` folder contains the input files to be compiled. | | `outDir` | Maps to the library's output folder. By default, this is the `dist/my-lib` folder at the root of your workspace. | 2. To make sure your schematics source files get compiled into the library bundle, add the following scripts to the `package.json` file in your library project's root folder (`projects/my-lib`). 
``` { "name": "my-lib", "version": "0.0.1", "scripts": { "build": "tsc -p tsconfig.schematics.json", "postbuild": "copyfiles schematics/*/schema.json schematics/*/files/** schematics/collection.json ../../dist/my-lib/" }, "peerDependencies": { "@angular/common": "^7.2.0", "@angular/core": "^7.2.0" }, "schematics": "./schematics/collection.json", "ng-add": { "save": "devDependencies" }, "devDependencies": { "copyfiles": "file:../../node_modules/copyfiles", "typescript": "file:../../node_modules/typescript" } } ``` * The `build` script compiles your schematic using the custom `tsconfig.schematics.json` file * The `postbuild` script copies the schematic files after the `build` script completes * Both the `build` and the `postbuild` scripts require the `copyfiles` and `typescript` dependencies. To install the dependencies, navigate to the path defined in `devDependencies` and run `npm install` before you run the scripts. Providing generation support ---------------------------- You can add a named schematic to your collection that lets your users use the `ng generate` command to create an artifact that is defined in your library. We'll assume that your library defines a service, `my-service`, that requires some setup. You want your users to be able to generate it using the following CLI command. ``` ng generate my-lib:my-service ``` To begin, create a new subfolder, `my-service`, in the `schematics` folder. ### Configure the new schematic When you add a schematic to the collection, you have to point to it in the collection's schema, and provide configuration files to define options that a user can pass to the command. 1. Edit the `schematics/collection.json` file to point to the new schematic subfolder, and include a pointer to a schema file that specifies inputs for the new schematic. ``` { "$schema": "../../../node_modules/@angular-devkit/schematics/collection-schema.json", "schematics": { "ng-add": { "description": "Add my library to the project.", "factory": "./ng-add/index#ngAdd" }, "my-service": { "description": "Generate a service in the project.", "factory": "./my-service/index#myService", "schema": "./my-service/schema.json" } } } ``` 2. Go to the `<lib-root>/schematics/my-service` folder. 3. Create a `schema.json` file and define the available options for the schematic. ``` { "$schema": "http://json-schema.org/schema", "$id": "SchematicsMyService", "title": "My Service Schema", "type": "object", "properties": { "name": { "description": "The name of the service.", "type": "string" }, "path": { "type": "string", "format": "path", "description": "The path to create the service.", "visible": false }, "project": { "type": "string", "description": "The name of the project.", "$default": { "$source": "projectName" } } }, "required": [ "name" ] } ``` * *id*: A unique ID for the schema in the collection. * *title*: A human-readable description of the schema. * *type*: A descriptor for the type provided by the properties. * *properties*: An object that defines the available options for the schematic.Each option associates key with a type, description, and optional alias. The type defines the shape of the value you expect, and the description is displayed when the user requests usage help for your schematic. See the workspace schema for additional customizations for schematic options. 4. Create a `schema.ts` file and define an interface that stores the values of the options defined in the `schema.json` file. ``` export interface Schema { // The name of the service. 
name: string; // The path to create the service. path?: string; // The name of the project. project?: string; } ``` | Options | Details | | --- | --- | | name | The name you want to provide for the created service. | | path | Overrides the path provided to the schematic. The default path value is based on the current working directory. | | project | Provides a specific project to run the schematic on. In the schematic, you can provide a default if the option is not provided by the user. | ### Add template files To add artifacts to a project, your schematic needs its own template files. Schematic templates support special syntax to execute code and variable substitution. 1. Create a `files/` folder inside the `schematics/my-service/` folder. 2. Create a file named `__name@dasherize__.service.ts.template` that defines a template to use for generating files. This template will generate a service that already has Angular's `[HttpClient](../api/common/http/httpclient)` injected into its constructor. ``` import { Injectable } from '@angular/core'; import { HttpClient } from '@angular/common/http'; @Injectable({ providedIn: 'root' }) export class <%= classify(name) %>Service { constructor(private http: HttpClient) { } } ``` * The `classify` and `dasherize` methods are utility functions that your schematic uses to transform your source template and filename. * The `name` is provided as a property from your factory function. It is the same `name` you defined in the schema. ### Add the factory function Now that you have the infrastructure in place, you can define the main function that performs the modifications you need in the user's project. The Schematics framework provides a file templating system, which supports both path and content templates. The system operates on placeholders defined inside files or paths that loaded in the input `Tree`. It fills these in using values passed into the `Rule`. For details of these data structures and syntax, see the [Schematics README](https://github.com/angular/angular-cli/blob/main/packages/angular_devkit/schematics/README.md). 1. Create the main file `index.ts` and add the source code for your schematic factory function. 2. First, import the schematics definitions you will need. The Schematics framework offers many utility functions to create and use rules when running a schematic. ``` import { Rule, Tree, SchematicsException, apply, url, applyTemplates, move, chain, mergeWith } from '@angular-devkit/schematics'; import { strings, normalize, virtualFs, workspaces } from '@angular-devkit/core'; ``` 3. Import the defined schema interface that provides the type information for your schematic's options. ``` import { Rule, Tree, SchematicsException, apply, url, applyTemplates, move, chain, mergeWith } from '@angular-devkit/schematics'; import { strings, normalize, virtualFs, workspaces } from '@angular-devkit/core'; import { Schema as MyServiceSchema } from './schema'; ``` 4. To build up the generation schematic, start with an empty rule factory. ``` export function myService(options: MyServiceSchema): Rule { return (tree: Tree) => tree; } ``` This rule factory returns the tree without modification. The options are the option values passed through from the `ng generate` command. Define a generation rule ------------------------ You now have the framework in place for creating the code that actually modifies the user's application to set it up for the service defined in your library. 
The Angular workspace where the user installed your library contains multiple projects (applications and libraries). The user can specify the project on the command line, or let it default. In either case, your code needs to identify the specific project to which this schematic is being applied, so that you can retrieve information from the project configuration. Do this using the `Tree` object that is passed in to the factory function. The `Tree` methods give you access to the complete file tree in your workspace, letting you read and write files during the execution of the schematic. ### Get the project configuration 1. To determine the destination project, use the `workspaces.readWorkspace` method to read the contents of the workspace configuration file, `angular.json`. To use `workspaces.readWorkspace` you need to create a `workspaces.WorkspaceHost` from the `Tree`. Add the following code to your factory function. ``` import { Rule, Tree, SchematicsException, apply, url, applyTemplates, move, chain, mergeWith } from '@angular-devkit/schematics'; import { strings, normalize, virtualFs, workspaces } from '@angular-devkit/core'; import { Schema as MyServiceSchema } from './schema'; function createHost(tree: Tree): workspaces.WorkspaceHost { return { async readFile(path: string): Promise<string> { const data = tree.read(path); if (!data) { throw new SchematicsException('File not found.'); } return virtualFs.fileBufferToString(data); }, async writeFile(path: string, data: string): Promise<void> { return tree.overwrite(path, data); }, async isDirectory(path: string): Promise<boolean> { return !tree.exists(path) && tree.getDir(path).subfiles.length > 0; }, async isFile(path: string): Promise<boolean> { return tree.exists(path); }, }; } export function myService(options: MyServiceSchema): Rule { return async (tree: Tree) => { const host = createHost(tree); const { workspace } = await workspaces.readWorkspace('/', host); }; } ``` Be sure to check that the context exists and throw the appropriate error. 2. Now that you have the project name, use it to retrieve the project-specific configuration information. ``` const project = (options.project != null) ? workspace.projects.get(options.project) : null; if (!project) { throw new SchematicsException(`Invalid project name: ${options.project}`); } const projectType = project.extensions.projectType === 'application' ? 'app' : 'lib'; ``` The `workspace.projects` object contains all the project-specific configuration information. 3. The `options.path` determines where the schematic template files are moved to once the schematic is applied. The `path` option in the schematic's schema is substituted by default with the current working directory. If the `path` is not defined, use the `sourceRoot` from the project configuration along with the `projectType`. ``` if (options.path === undefined) { options.path = `${project.sourceRoot}/${projectType}`; } ``` ### Define the rule A `Rule` can use external template files, transform them, and return another `Rule` object with the transformed template. Use the templating to generate any custom files required for your schematic. 1. Add the following code to your factory function. ``` const templateSource = apply(url('./files'), [ applyTemplates({ classify: strings.classify, dasherize: strings.dasherize, name: options.name }), move(normalize(options.path as string)) ]); ``` | Methods | Details | | --- | --- | | `apply()` | Applies multiple rules to a source and returns the transformed source. 
It takes 2 arguments, a source and an array of rules. | | `url()` | Reads source files from your filesystem, relative to the schematic. | | `applyTemplates()` | Receives an argument of methods and properties you want make available to the schematic template and the schematic filenames. It returns a `Rule`. This is where you define the `classify()` and `dasherize()` methods, and the `name` property. | | `classify()` | Takes a value and returns the value in title case. For example, if the provided name is `my service`, it is returned as `MyService`. | | `dasherize()` | Takes a value and returns the value in dashed and lowercase. For example, if the provided name is MyService, it is returned as `my-service`. | | `move()` | Moves the provided source files to their destination when the schematic is applied. | 2. Finally, the rule factory must return a rule. ``` return chain([ mergeWith(templateSource) ]); ``` The `chain()` method lets you combine multiple rules into a single rule, so that you can perform multiple operations in a single schematic. Here you are only merging the template rules with any code executed by the schematic. See a complete example of the following schematic rule function. ``` import { Rule, Tree, SchematicsException, apply, url, applyTemplates, move, chain, mergeWith } from '@angular-devkit/schematics'; import { strings, normalize, virtualFs, workspaces } from '@angular-devkit/core'; import { Schema as MyServiceSchema } from './schema'; function createHost(tree: Tree): workspaces.WorkspaceHost { return { async readFile(path: string): Promise<string> { const data = tree.read(path); if (!data) { throw new SchematicsException('File not found.'); } return virtualFs.fileBufferToString(data); }, async writeFile(path: string, data: string): Promise<void> { return tree.overwrite(path, data); }, async isDirectory(path: string): Promise<boolean> { return !tree.exists(path) && tree.getDir(path).subfiles.length > 0; }, async isFile(path: string): Promise<boolean> { return tree.exists(path); }, }; } export function myService(options: MyServiceSchema): Rule { return async (tree: Tree) => { const host = createHost(tree); const { workspace } = await workspaces.readWorkspace('/', host); const project = (options.project != null) ? workspace.projects.get(options.project) : null; if (!project) { throw new SchematicsException(`Invalid project name: ${options.project}`); } const projectType = project.extensions.projectType === 'application' ? 'app' : 'lib'; if (options.path === undefined) { options.path = `${project.sourceRoot}/${projectType}`; } const templateSource = apply(url('./files'), [ applyTemplates({ classify: strings.classify, dasherize: strings.dasherize, name: options.name }), move(normalize(options.path as string)) ]); return chain([ mergeWith(templateSource) ]); }; } ``` For more information about rules and utility methods, see [Provided Rules](https://github.com/angular/angular-cli/tree/main/packages/angular_devkit/schematics#provided-rules). Running your library schematic ------------------------------ After you build your library and schematics, you can install the schematics collection to run against your project. The following steps show you how to generate a service using the schematic you created earlier. ### Build your library and schematics From the root of your workspace, run the `ng build` command for your library. 
``` ng build my-lib ``` Then, change into your library directory to build the schematics. ``` cd projects/my-lib npm run build ``` ### Link the library Your library and schematics are packaged and placed in the `dist/my-lib` folder at the root of your workspace. To run the schematic, you need to link the library into your `node_modules` folder. From the root of your workspace, run the `npm link` command with the path to your distributable library. ``` npm link dist/my-lib ``` ### Run the schematic Now that your library is installed, run the schematic using the `ng generate` command. ``` ng generate my-lib:my-service --name my-data ``` In the console, you see that the schematic was run and the `my-data.service.ts` file was created in your application folder. ``` CREATE src/app/my-data.service.ts (208 bytes) ``` Last reviewed on Mon Feb 28 2022
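As a point of reference, with the template file defined earlier and the `--name my-data` option, the generated service should look roughly like the following; the class name comes from `classify('my-data')` and the file name from `dasherize()`, so minor formatting differences from your actual output are expected.

```
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class MyDataService {
  // HttpClient is injected because the schematic template declares it in the constructor.
  constructor(private http: HttpClient) { }
}
```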
angular Angular Roadmap Angular Roadmap =============== Last updated: 2022-11-05 Angular receives a large number of feature requests, both from inside Google and from the broader open-source community. At the same time, our list of projects contains plenty of maintenance tasks, code refactorings, and potential performance improvements. We bring together representatives from developer relations, product management, and engineering to prioritize this list. As new projects come into the queue, we regularly position them based on relative priority to other projects. As work gets done, projects move up in the queue. The following projects are not associated with a particular Angular version. We will release them on completion, and they will be part of a specific version based on our release schedule, following semantic versioning. For example, features are released in the next minor after they are complete, or the next major if they include breaking changes. In progress ----------- ### Explore hydration and server-side rendering usability improvements As the first step of this project we will implement non-destructive hydration. This technique will allow us to reuse the server-side rendered DOM and rather than rerendering it only attach event listeners and create data structures required by the Angular runtime. As the next step, we are going to further explore the dynamically evolving space of partial hydration and resumability. Each of the approaches has their trade-offs and we'd like to make an informed decision what's the most optimal long-term solution for Angular. ### Improve runtime performance and make Zone.js optional As part of this effort we are revisiting Angular's reactivity model to make Zone.js optional and improve runtime performance. By default Angular runs change detection globally, traversing the entire component tree. We're exploring options to run change detection only in affected components. This way, we simplify the framework, improve debugging, and reduce application bundle size. Additionally, this lets us take advantage of built-in async/await syntax, which currently Zone.js does not support. ### Improve documentation and schematics for standalone components We are working on developing an `ng new` collection for applications bootstrapped with a standalone component. Additionally, we are filling the documentation gaps of the simplified standalone component APIs. ### Introduce dependency injection debugging APIs To improve the debugging utilities of Angular and Angular DevTools, we'll work on APIs that provide access the dependency injection runtime. As part of the project we'll expose debugging methods that allow us to explore the injector hierarchy and the dependencies across their associated providers. ### Streamline standalone imports with Language Service As part of this initiative we are going to implement automatic import of template dependencies for standalone components. Additionally, to enable smaller application bundles the language service will propose automatic removal of unused imports. ### Investigate modern bundles To improve development experience by speeding up build times, we plan to explore options to improve JavaScript bundles created by Angular CLI. As part of the project experiment with [esbuild](https://esbuild.github.io) and other open source solutions, compare them with the state-of-the-art tooling in Angular CLI, and report the findings. In Angular v15 we have experimental esbuild support in `ng build` and `ng build --watch`. 
We'll continue iterating on the solution until we're confident to release it as stable. ### New CDK primitives We are working on new CDK primitives to facilitate creating custom components based on the WAI-ARIA design patterns for [Combobox](https://www.w3.org/TR/wai-aria-practices-1.1/#combobox). Angular v14 introduced stable [menu and dialog primitives](https://material.angular.io/cdk/categories) as part of this project and in v15 [Listbox](https://www.w3.org/TR/wai-aria-practices-1.1/#Listbox). ### Angular component accessibility We are evaluating components in Angular Material against accessibility standards such as WCAG and working to fix any issues that arise from this process. ### Documentation refactoring Ensure all existing documentation fits into a consistent set of content types. Update excessive use of tutorial-style documentation into independent topics. We want to ensure the content outside the main tutorials is self-sufficient without being tightly coupled to a series of guides. In Q2 2022, we refactored the [template content](https://github.com/angular/angular/pull/45897). The next steps are to introduce better structure for components and dependency injection. ### Investigate micro frontend architecture for scalable development processes For the past couple of quarters we understood and defined the problem space. We are going to follow up with a series of blog posts on best practices when developing applications at scale. ### Update getting started tutorial We're working on updating the Angular getting started experience with standalone components. As part of this initiative, we'd like to create a new textual and video tutorials. ### Improvements in the image directive We released the Angular [image directive](https://developer.chrome.com/blog/angular-image-directive/) as stable in v15. We introduced a new fill mode feature that enables images to fit within their parent container rather than having explicit dimensions. Currently, this feature is in [developer preview](releases#developer-preview). Next we'll be working on collecting feedback from developers before we promote fill mode as stable. Future ------ ### Token-based theming APIs To provide better customization of our Angular material components and enable Material 3 capabilities, we'll be collaborating with Google's Material Design team on defining token-based theming APIs. ### Modernize Angular's unit testing experience In v12 we revisited the Angular end-to-end testing experience by replacing Protractor with modern alternatives such as Cypress, Nightwatch, and Webdriver.io. Next we'd like to tackle `ng test` to modernize Angular's unit testing experience. ### Revamp performance dashboards to detect regressions We have a set of benchmarks that we run against every code change to ensure Angular aligns with our performance standards. To ensure the runtime of the framework does not regress after a code change, we need to refine some of the existing infrastructure the dashboards step on. ### Improved build performance with ngc as a tsc plugin distribution Distributing the Angular compiler as a plugin of the TypeScript compiler will substantially improve build performance for developers and reduce maintenance costs. ### Ergonomic component level code-splitting APIs A common problem with web applications is their slow initial load time. A way to improve it is to apply more granular code-splitting on a component level. To encourage this practice, we will be working on more ergonomic code-splitting APIs. 
### Ensure smooth adoption for future RxJS changes (version 8 and beyond) We want to ensure Angular developers are taking advantage of the latest capabilities of RxJS and have a smooth transition to the next major releases of the framework. For this purpose, we will explore and document the scope of the changes in v7 and beyond RxJS, and plan an update strategy. ### Support two-dimensional drag-and-drop As part of this project we'd like to implement mixed orientation support for the Angular CDK drag and drop. This is one of the most highly [requested features](https://github.com/angular/components/issues/13372) in the repository. Completed --------- Show all Hide all ### Improve image performance *Completed Q4 2022* The [Aurora](https://web.dev/introducing-aurora/) and the Angular teams are working on the implementation of an image directive that aims to improve [Core Web Vitals](https://web.dev/vitals). We shipped a stable version of the image directive in v15. ### Modern CSS *Completed Q4 2022* The Web ecosystem evolves constantly and we want to reflect the latest modern standards in Angular. In this project we aim to provide guidelines on using modern CSS features in Angular to ensure developers follow best practices for layout, styling, etc. We shared official guidelines for layout and as part of the initiative stopped publishing flex layout. Learn [more on our blog](https://blog.angular.io/modern-css-in-angular-layouts-4a259dca9127). ### Support adding directives to host elements *Completed Q4 2022* A [long-standing feature request](https://github.com/angular/angular/issues/8785) is to add the ability to add directives to host elements. The feature lets developers augment their own components with additional behaviors without using inheritance. In v15 we shipped our directive composition API, which enables enhancing host elements with directives. ### Better stack traces *Completed Q4 2022* The Angular and the Chrome DevTools are working together to enable more readable stack traces for error messages. In v15 we [released improved](https://twitter.com/angular/status/1578807563017392128) relevant and linked stack traces. As a lower priority initiative, we'll be exploring how to make the stack traces friendlier by providing more accurate call frame names for templates. ### Enhanced Angular Material components by integrating MDC Web *Completed Q4 2022* [MDC Web](https://material.io/develop/web) is a library created by the Google Material Design team that provides reusable primitives for building Material Design components. The Angular team is incorporating these primitives into Angular Material. Using MDC Web aligns Angular Material more closely with the Material Design specification, expand accessibility, improve component quality, and improve the velocity of our team. ### Implement APIs for optional NgModules *Completed Q4 2022* In the process of making Angular simpler, we are working on [introducing APIs](standalone-components) that allow developers to initialize applications, instantiate components, and use the router without NgModules. Angular v14 introduces developer preview of the APIs for standalone components, directives, and pipes. In the next few quarters we'll collect feedback from developers and finalize the project making the APIs stable. As the next step we will work on improving use cases such as `[TestBed](../api/core/testing/testbed)`, Angular elements, etc. 
### Allow binding to protected fields in templates *Completed Q2 2022* To improve the encapsulation of Angular components we enabled binding to protected members of the component instance. This way you'll no longer have to expose a field or a method as public to use it inside your templates. ### Publish guides on advanced concepts *Completed Q2 2022* Develop and publish an in-depth guide on change detection. Develop content for performance profiling of Angular applications. Cover how change detection interacts with Zone.js and explain when it gets triggered, how to profile its duration, as well as common practices for performance optimization. ### Rollout strict typings for `@angular/forms` *Completed Q2 2022* In Q4 2021 we designed a solution for introducing strict typings for forms and in Q1 2022 we concluded the corresponding [request for comments](https://github.com/angular/angular/discussions/44513). Currently, we are implementing a rollout strategy with an automated migration step that will enable the improvements for existing projects. We are first testing the solution with more than 2,500 projects at Google to ensure a smooth migration path for the external community. ### Remove legacy [View Engine](glossary#ve) *Completed Q1 2022* After the transition of all our internal tooling to Ivy is completed, we will remove the legacy View Engine for reduced Angular conceptual overhead, smaller package size, lower maintenance cost, and lower codebase complexity. ### Simplified Angular mental model with optional NgModules *Completed Q1 2022* To simplify the Angular mental model and learning journey, we will be working on making NgModules optional. This work lets developers develop standalone components and implement an alternative API for declaring the compilation scope of the component. We kicked this project off with high-level design discussions that we captured in an [RFC](https://github.com/angular/angular/discussions/43784). ### Design strict typing for `@angular/forms` *Completed Q1 2022* We will work on finding a way to implement stricter type checking for reactive forms with minimal backward incompatible implications. This way, we let developers catch more issues during development time, enable better text editor and IDE support, and improve the type checking for reactive forms. ### Improve integration of Angular DevTools with framework *Completed Q1 2022* To improve the integration of Angular DevTools with the framework, we are working on moving the codebase to the [angular/angular](https://github.com/angular/angular) monorepository. This includes transitioning Angular DevTools to Bazel and integrating it into the existing processes and CI pipeline. ### Launch advanced compiler diagnostics *Completed Q1 2022* Extend the diagnostics of the Angular compiler outside type checking. Introduce other correctness and conformance checks to further guarantee correctness and best practices. ### Update our e2e testing strategy *Completed Q3 2021* To ensure we provide a future-proof e2e testing strategy, we want to evaluate the state of Protractor, community innovations, e2e best practices, and explore novel opportunities. As first steps of the effort, we shared an [RFC](https://github.com/angular/protractor/issues/5502) and worked with partners to ensure smooth integration between the Angular CLI and state-of-the-art tooling for e2e testing. As the next step, we need to finalize the recommendations and compile a list of resources for the transition. 
### Angular libraries use Ivy *Completed Q3 2021* Earlier in 2020, we shared an [RFC](https://github.com/angular/angular/issues/38366) for Ivy library distribution. After invaluable feedback from the community, we developed a design of the project. We are now investing in the development of Ivy library distribution, including an update of the library package format to use Ivy compilation, unblock the deprecation of the View Engine library format, and [ngcc](glossary#ngcc). ### Improve test times and debugging with automatic test environment tear down *Completed Q3 2021* To improve test time and create better isolation across tests, we want to change [`TestBed`](../api/core/testing/testbed) to automatically clean up and tear down the test environment after each test run. ### Deprecate and remove IE11 support *Completed Q3 2021* Internet Explorer 11 (IE11) has been preventing Angular from taking advantage of some of the modern features of the Web platform. As part of this project we are going to deprecate and remove IE11 support to open the path for modern features that evergreen browsers provide. We ran an [RFC](https://github.com/angular/angular/issues/41840) to collect feedback from the community and decide on next steps to move forward. ### Leverage ES2017+ as the default output language *Completed Q3 2021* Supporting modern browsers lets us take advantage of the more compact, expressive, and performant new syntax of JavaScript. As part of this project we will investigate what the blockers are to moving forward with this effort, and take the steps to enable it. ### Accelerated debugging and performance profiling with Angular DevTools *Completed Q2 2021* We are working on development tooling for Angular that provides utilities for debugging and performance profiling. This project aims to help developers understand the component structure and the change detection in an Angular application. ### Streamline releases with consolidated Angular versioning & branching *Completed Q2 2021* We want to consolidate release management tooling between the multiple GitHub repositories for Angular ([angular/angular](https://github.com/angular/angular), [angular/angular-cli](https://github.com/angular/angular-cli), and [angular/components](https://github.com/angular/components)). This effort lets us reuse infrastructure, unify and simplify processes, and improve the reliability of our release process. ### Higher developer consistency with commit message standardization *Completed Q2 2021* We want to unify commit message requirements and conformance across Angular repositories ([angular/angular](https://github.com/angular/angular), [angular/components](https://github.com/angular/components), and [angular/angular-cli](https://github.com/angular/angular-cli)) to bring consistency to our development process and reuse infrastructure tooling. ### Transition the Angular language service to Ivy *Completed Q2 2021* The goal of this project is to improve the experience and remove legacy dependency by transitioning the language service to Ivy. Today the language service still uses the View Engine compiler and type checking, even for Ivy applications. We want to use the Ivy template parser and improved type checking for the Angular Language service to match application behavior. This migration is also a step towards unblocking the removal of View Engine, which will simplify Angular, reduce the npm package size, and improve the maintainability of the framework. 
### Increased security with native Trusted Types in Angular *Completed Q2 2021* In collaboration with the Google security team, we are adding support for the new [Trusted Types](https://web.dev/trusted-types) API. This web platform API helps developers build more secure web applications. ### Optimized build speed and bundle sizes with Angular CLI webpack 5 *Completed Q2 2021* As part of the v11 release, we introduced an opt-in preview of webpack 5 in the Angular CLI. To ensure stability, we will continue iterating on the implementation to enable build speed and bundle size improvements. ### Faster apps by inlining critical styles in Universal applications *Completed Q1 2021* Loading external stylesheets is a blocking operation, which means that the browser cannot start rendering your application until it loads all the referenced CSS. Having render-blocking resources in the header of a page can significantly impact its load performance, for example, its [first contentful paint](https://web.dev/first-contentful-paint). To make apps faster, we have been collaborating with the Google Chrome team on inlining critical CSS and loading the rest of the styles asynchronously. ### Improve debugging with better Angular error messages *Completed Q1 2021* Error messages often bring limited actionable information to help developers resolve them. We have been working on making error messages more discoverable by adding associated codes, developing guides, and other materials to ensure a smoother debugging experience. ### Improved developer onboarding with refreshed introductory documentation *Completed Q1 2021* We will redefine the user learning journeys and refresh the introductory documentation. We will clearly state the benefits of Angular, how to explore its capabilities and provide guidance so developers can become proficient with the framework in as little time as possible. ### Expand component harnesses best practices *Completed Q1 2021* Angular CDK introduced the concept of [component test harnesses](https://material.angular.io/cdk/test-harnesses) to Angular in version 9. Test harnesses let component authors create supported APIs for testing component interactions. We are continuing to improve this harness infrastructure and clarifying the best practices around using harnesses. We are also working to drive more harness adoption inside of Google. ### Author a guide for content projection *Completed Q2 2021* Content projection is a core Angular concept that does not have the presence it deserves in the documentation. As part of this project we want to identify the core use cases and concepts for content projection and document them. ### Migrate to ESLint *Completed Q4 2020* With the deprecation of TSLint we will be moving to ESLint. As part of the process, we will work on ensuring backward compatibility with our current recommended TSLint configuration, implement a migration strategy for existing Angular applications and introduce new tooling to the Angular CLI toolchain. ### Operation Bye Bye Backlog (also known as Operation Byelog) *Completed Q4 2020* We are actively investing up to 50% of our engineering capacity on triaging issues and PRs until we have a clear understanding of broader community needs. After that, we will commit up to 20% of our engineering capacity to keep up with new submissions promptly. Last reviewed on Mon Feb 28 2022
angular Finish up a documentation pull request Finish up a documentation pull request ====================================== This topic describes how to keep your workspace tidy after your pull request is merged and closed. Review the commit log of the upstream repo ------------------------------------------ This procedure confirms that your commit is now in the `main` branch of the `angular/angular` repo. #### To review the commit log on `github.com` for your commit In a web browser, open [`https://github.com/angular/angular/commits/main`](https://github.com/angular/angular/commits/main). 1. Review the commit list. 1. Find the entry with your GitHub username, commit message, and pull request number of your commit. The commit number might not match the commit from your working branch because of how commits are merged. 2. If you see your commit listed, your commit has been merged into `angular/angular` and you can continue cleaning up your workspace. 3. If you don't see your commit in the list, you might need to wait before you retry this step. Do not continue cleaning your workspace until you see your commit listed in or after the log entry that contains `origin/main`. 4. If you see your commit listed above the log entry that contains `origin/main`, then you might need to update your clone of the `angular/angular` repo again. Update your fork from the upstream repo --------------------------------------- After you see that the commit from your pull request has been merged into the upstream `angular/angular` repo, update your fork. This procedure updates your clone of `personal/angular` on your local computer and then the repo in the cloud. #### To update your fork with the upstream repo Perform these steps from a command-line tool on your local computer. 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 2. Run this command to check out the `main` branch. ``` git checkout main ``` 3. Run this command to update the `main` branch in the `working` directory on your local computer from the upstream `angular/angular` repo. ``` git fetch upstream git merge upstream/main ``` 4. Run this command to update your `personal/angular` repo on `github.com` with the latest from the upstream `angular/angular` repo. ``` git push ``` 5. Run this command to review the commit log of your fork. The `main` branch on your local computer and your origin repo on `github.com` are now in sync with the upstream `angular/angular` repo. Run this command to list the recent commits. ``` git log --pretty=format:"%h %as %an %Cblue%s %Cgreen%D" ``` 6. In the output of the previous `git log` command, find the entry with your GitHub username, commit message, and pull request number of your commit. The commit number might not match the commit from your working branch because of how commits are merged. You should find the commit from your pull request in or near the log entry that contains `upstream/main`. If you find the commit from your pull request in the correct place, you can continue to delete your working branch. Delete the working branch ------------------------- After you confirm that your pull request is merged into `angular/angular` and appears in the `main` branch of your fork, you can delete the `working` branch. 
Because your working branch was merged into the `main` branch of your fork, and the pull request has been closed, you no longer need the `working` branch. It might be tempting to keep it around, just in case, but it is probably not necessary. If you keep all your old working branches, your repository can collect unnecessary clutter. #### To delete your working branch 1. From your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, run this command to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). Remember to replace `personal` with your GitHub username. ``` cd personal/angular ``` 2. Run this command to check out the `main` branch. ``` git checkout main ``` 3. Run this command to delete the working branch used in the pull request from your local computer. Replace `working-branch-name` with the name of your working branch. ``` git branch -d working-branch-name ``` 4. Run this command to delete the working branch from your `personal/angular` repo on `github.com`. Replace `working-branch-name` with the name of your working branch. ``` git push -d origin working-branch-name ``` Next step --------- After you delete the working branch for your last issue, you're ready to [select another issue to resolve](doc-select-issue). Last reviewed on Wed Oct 12 2022 angular Tutorial: Creating custom route matches Tutorial: Creating custom route matches ======================================= The Angular Router supports a powerful matching strategy that you can use to help users navigate your application. This matching strategy supports static routes, variable routes with parameters, wildcard routes, and so on. Also, build your own custom pattern matching for situations in which the URLs are more complicated. In this tutorial, you'll build a custom route matcher using Angular's `[UrlMatcher](../api/router/urlmatcher)`. This matcher looks for a Twitter handle in the URL. For a working example of the final version of this tutorial, see the live example. Objectives ---------- Implement Angular's `[UrlMatcher](../api/router/urlmatcher)` to create a custom route matcher. Prerequisites ------------- To complete this tutorial, you should have a basic understanding of the following concepts: * JavaScript * HTML * CSS * [Angular CLI](cli) If you are unfamiliar with how Angular's router works, review [Using Angular routes in a single-page application](router-tutorial). Create a sample application --------------------------- Using the Angular CLI, create a new application, *angular-custom-route-match*. In addition to the default Angular application framework, you will also create a *profile* component. 1. Create a new Angular project, *angular-custom-route-match*. ``` ng new angular-custom-route-match ``` When prompted with `Would you like to add Angular routing?`, select `Y`. When prompted with `Which stylesheet format would you like to use?`, select `CSS`. After a few moments, a new project, `angular-custom-route-match`, is ready. 2. From your terminal, navigate to the `angular-custom-route-match` directory. 3. Create a component, *profile*. ``` ng generate component profile ``` 4. In your code editor, locate the file, `profile.component.html` and replace the placeholder content with the following HTML. ``` <p> Hello {{ username$ | async }}! </p> ``` 5. In your code editor, locate the file, `app.component.html` and replace the placeholder content with the following HTML. 
``` <h2>Routing with Custom Matching</h2> Navigate to <a routerLink="/@Angular">my profile</a> <router-outlet></router-outlet> ``` Configure your routes for your application ------------------------------------------ With your application framework in place, you next need to add routing capabilities to the `app.module.ts` file. As a part of this process, you will create a custom URL matcher that looks for a Twitter handle in the URL. This handle is identified by a preceding `@` symbol. 1. In your code editor, open your `app.module.ts` file. 2. Add an `import` statement for Angular's `[RouterModule](../api/router/routermodule)` and `[UrlSegment](../api/router/urlsegment)`. ``` import { RouterModule, UrlSegment } from '@angular/router'; ``` 3. In the imports array, add a `RouterModule.forRoot([])` statement. ``` @NgModule({ imports: [ BrowserModule, FormsModule, RouterModule.forRoot([ /* . . . */ ])], declarations: [ AppComponent, ProfileComponent ], bootstrap: [ AppComponent ] }) ``` 4. Define the custom route matcher by adding the following code to the `[RouterModule.forRoot()](../api/router/routermodule#forRoot)` statement. ``` { matcher: (url) => { if (url.length === 1 && url[0].path.match(/^@[\w]+$/gm)) { return { consumed: url, posParams: { username: new UrlSegment(url[0].path.slice(1), {}) } }; } return null; }, component: ProfileComponent } ``` This custom matcher is a function that performs the following tasks: * The matcher verifies that the array contains only one segment * The matcher employs a regular expression to ensure that the format of the username is a match * If there is a match, the function returns the entire URL, defining a `username` route parameter as a substring of the path * If there isn't a match, the function returns null and the router continues to look for other routes that match the URL A custom URL matcher behaves like any other route definition. Define child routes or lazy loaded routes as you would with any other route. Subscribe to the route parameters --------------------------------- With the custom matcher in place, you now need to subscribe to the route parameters in the `profile` component. 1. In your code editor, open your `profile.component.ts` file. 2. Add an `import` statement for Angular's `[ActivatedRoute](../api/router/activatedroute)` and `[ParamMap](../api/router/parammap)`. ``` import { ActivatedRoute, ParamMap } from '@angular/router'; ``` 3. Add an `import` statement for RxJS `map`. ``` import { map } from 'rxjs/operators'; ``` 4. Subscribe to the `username` route parameter. ``` username$ = this.route.paramMap .pipe( map((params: ParamMap) => params.get('username')) ); ``` 5. Inject the `[ActivatedRoute](../api/router/activatedroute)` into the component's constructor. ``` constructor(private route: ActivatedRoute) { } ``` Test your custom URL matcher ---------------------------- With your code in place, you can now test your custom URL matcher. 1. From a terminal window, run the `ng serve` command. ``` ng serve ``` 2. Open a browser to `http://localhost:4200`. You should see a single web page, consisting of a sentence that reads `Navigate to my profile`. 3. Click the **my profile** hyperlink. A new sentence, reading `Hello Angular!`, appears on the page. Next steps ---------- Pattern matching with the Angular Router provides you with a lot of flexibility when you have dynamic URLs in your application. 
To learn more about the Angular Router, see the following topics: * [In-app Routing and Navigation](router) * [Router API](../api/router) > This content is based on [Custom Route Matching with the Angular Router](https://medium.com/@brandontroberts/custom-route-matching-with-the-angular-router-fbdd48665483), by [Brandon Roberts](https://twitter.com/brandontroberts). > > Last reviewed on Mon Feb 28 2022 angular Strict mode Strict mode =========== Angular CLI creates all new workspaces and projects with **strict mode** enabled. Strict mode improves maintainability and helps you catch bugs ahead of time. Additionally, strict mode applications are easier to statically analyze and can help the `ng update` command refactor code more safely and precisely when you are updating to future versions of Angular. Specifically, strict mode affects newly generated applications in the following way: * Enables [`strict` mode in TypeScript](https://www.typescriptlang.org/tsconfig#strict), as well as other strictness flags recommended by the TypeScript team. Specifically, `forceConsistentCasingInFileNames`, `noImplicitReturns`, and `noFallthroughCasesInSwitch`. * Turns on strict Angular compiler flags [`strictTemplates`](angular-compiler-options#stricttemplates), [`strictInjectionParameters`](angular-compiler-options#strictinjectionparameters), and [`strictInputAccessModifiers`](template-typecheck#troubleshooting-template-errors). * Reduces the [bundle size budgets](build#configuring-size-budgets) for the `initial` and `anyComponentStyle` budget types by 75% compared to the previous defaults. You can apply these settings at the workspace and project level. Using the basic `ng new` command to create a new workspace and application automatically uses strict mode, as in the following command: ``` ng new [project-name] ``` To create a new application in the strict mode within an existing non-strict workspace, run the following command: ``` ng generate application [project-name] --strict ``` Last reviewed on Mon Feb 28 2022 angular Service worker notifications Service worker notifications ============================ Push notifications are a compelling way to engage users. Through the power of service workers, notifications can be delivered to a device even when your application is not in focus. The Angular service worker enables the display of push notifications and the handling of notification click events. > When using the Angular service worker, push notification interactions are handled using the `[SwPush](../api/service-worker/swpush)` service. To learn more about the browser APIs involved see [Push API](https://developer.mozilla.org/docs/Web/API/Push_API) and [Using the Notifications API](https://developer.mozilla.org/docs/Web/API/Notifications_API/Using_the_Notifications_API). > > Prerequisites ------------- We recommend you have a basic understanding of the following: * [Getting Started with Service Workers](service-worker-getting-started) Notification payload -------------------- Invoke push notifications by pushing a message with a valid payload. See `[SwPush](../api/service-worker/swpush)` for guidance. > In Chrome, you can test push notifications without a backend. Open Devtools -> Application -> Service Workers and use the `Push` input to send a JSON notification payload. 
> > Notification click handling --------------------------- The default behavior for the `notificationclick` event is to close the notification and notify `[SwPush.notificationClicks](../api/service-worker/swpush#notificationClicks)`. You can specify an additional operation to be executed on `notificationclick` by adding an `onActionClick` property to the `data` object, and providing a `default` entry. This is especially useful when a notification is clicked while there are no open clients. ``` { "notification": { "title": "New Notification!", "data": { "onActionClick": { "default": {"operation": "openWindow", "url": "foo"} } } } } ``` ### Operations The Angular service worker supports the following operations: | Operations | Details | | --- | --- | | `openWindow` | Opens a new tab at the specified URL. | | `focusLastFocusedOrOpen` | Focuses the last focused client. If there is no client open, then it opens a new tab at the specified URL. | | `navigateLastFocusedOrOpen` | Focuses the last focused client and navigates it to the specified URL. If there is no client open, then it opens a new tab at the specified URL. | | `sendRequest` | Sends a simple GET request to the specified URL. | > URLs are resolved relative to the service worker's registration scope. If an `onActionClick` item does not define a `url`, then the service worker's registration scope is used. > > ### Actions Actions offer a way to customize how the user can interact with a notification. Using the `actions` property, you can define a set of available actions. Each action is represented as an action button that the user can click to interact with the notification. In addition, using the `onActionClick` property on the `data` object, you can tie each action to an operation to be performed when the corresponding action button is clicked: ``` { "notification": { "title": "New Notification!", "actions": [ {"action": "foo", "title": "Open new tab"}, {"action": "bar", "title": "Focus last"}, {"action": "baz", "title": "Navigate last"}, {"action": "qux", "title": "Send request in the background"}, {"action": "other", "title": "Just notify existing clients"} ], "data": { "onActionClick": { "default": {"operation": "openWindow"}, "foo": {"operation": "openWindow", "url": "/absolute/path"}, "bar": {"operation": "focusLastFocusedOrOpen", "url": "relative/path"}, "baz": {"operation": "navigateLastFocusedOrOpen", "url": "https://other.domain.com/"}, "qux": {"operation": "sendRequest", "url": "https://yet.another.domain.com/"} } } } } ``` > If an action does not have a corresponding `onActionClick` entry, then the notification is closed and `[SwPush.notificationClicks](../api/service-worker/swpush#notificationClicks)` is notified on existing clients. > > More on Angular service workers ------------------------------- You might also be interested in the following: * [Service Worker in Production](service-worker-devops) Last reviewed on Mon Feb 28 2022 angular Testing Testing ======= Testing your Angular application helps you check that your application is working as you expect. Prerequisites ------------- Before writing tests for your Angular application, you should have a basic understanding of the following concepts: * [Angular fundamentals](architecture) * [JavaScript](https://javascript.info/) * HTML * CSS * [Angular CLI](cli) The testing documentation offers tips and techniques for unit and integration testing Angular applications through a sample application created with the [Angular CLI](cli). 
This sample application is much like the one in the [*Tour of Heroes* tutorial](../tutorial/tour-of-heroes). > If you'd like to experiment with the application that this guide describes, run it in your browser or download and run it locally. > > Set up testing -------------- The Angular CLI downloads and installs everything you need to test an Angular application with the [Jasmine testing framework](https://jasmine.github.io). The project you create with the CLI is immediately ready to test. Just run the [`ng test`](cli/test) CLI command: ``` ng test ``` The `ng test` command builds the application in *watch mode*, and launches the [Karma test runner](https://karma-runner.github.io). The console output looks like the following: ``` 02 11 2022 09:08:28.605:INFO [karma-server]: Karma v6.4.1 server started at http://localhost:9876/ 02 11 2022 09:08:28.607:INFO [launcher]: Launching browsers Chrome with concurrency unlimited 02 11 2022 09:08:28.620:INFO [launcher]: Starting browser Chrome 02 11 2022 09:08:31.312:INFO [Chrome]: Connected on socket -LaEYvD2R7MdcS0-AAAB with id 31534482 Chrome: Executed 3 of 3 SUCCESS (0.193 secs / 0.172 secs) TOTAL: 3 SUCCESS ``` The last line of the log shows that Karma ran three tests that all passed. The test output is displayed in the browser using [Karma Jasmine HTML Reporter](https://github.com/dfederm/karma-jasmine-html-reporter). Click on a test row to re-run just that test or click on a description to re-run the tests in the selected test group ("test suite"). Meanwhile, the `ng test` command is watching for changes. To see this in action, make a small change to `app.component.ts` and save. The tests run again, the browser refreshes, and the new test results appear. Configuration ------------- The Angular CLI takes care of Jasmine and Karma configuration for you. It constructs the full configuration in memory, based on options specified in the `angular.json` file. If you want to customize Karma, you can create a `karma.conf.js` by running the following command: ``` ng generate config karma ``` > Read more about Karma configuration in the [Karma configuration guide](http://karma-runner.github.io/6.4/config/configuration-file.html). > > ### Other test frameworks You can also unit test an Angular application with other testing libraries and test runners. Each library and runner has its own distinctive installation procedures, configuration, and syntax. ### Test file name and location Inside the `src/app` folder the Angular CLI generated a test file for the `AppComponent` named `app.component.spec.ts`. > The test file extension **must be `.spec.ts`** so that tooling can identify it as a file with tests (also known as a *spec* file). > > The `app.component.ts` and `app.component.spec.ts` files are siblings in the same folder. The root file names (`app.component`) are the same for both files. Adopt these two conventions in your own projects for *every kind* of test file. 
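For illustration only, here is a minimal sketch of what such a spec file can look like, assuming the default `AppComponent` that `ng new` generates (the CLI-generated spec contains a few more expectations than this):

```
import { TestBed } from '@angular/core/testing';

import { AppComponent } from './app.component';

// Jasmine discovers this file because its name ends in .spec.ts.
describe('AppComponent', () => {
  beforeEach(async () => {
    // Declare the component under test in a throwaway testing module.
    await TestBed.configureTestingModule({
      declarations: [AppComponent],
    }).compileComponents();
  });

  it('should create the app', () => {
    // Create the component and check that an instance was produced.
    const fixture = TestBed.createComponent(AppComponent);
    expect(fixture.componentInstance).toBeTruthy();
  });
});
```

Karma picks this file up automatically because of the `.spec.ts` suffix; no extra registration is needed.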
#### Place your spec file next to the file it tests It's a good idea to put unit test spec files in the same folder as the application source code files that they test: * Such tests are painless to find * You see at a glance if a part of your application lacks tests * Nearby tests can reveal how a part works in context * When you move the source (inevitable), you remember to move the test * When you rename the source file (inevitable), you remember to rename the test file #### Place your spec files in a test folder Application integration specs can test the interactions of multiple parts spread across folders and modules. They don't really belong to any part in particular, so they don't have a natural home next to any one file. It's often better to create an appropriate folder for them in the `tests` directory. Of course specs that test the test helpers belong in the `test` folder, next to their corresponding helper files. Testing in continuous integration --------------------------------- One of the best ways to keep your project bug-free is through a test suite, but you might forget to run tests all the time. Continuous integration (CI) servers let you set up your project repository so that your tests run on every commit and pull request. To test your Angular CLI application in Continuous integration (CI) run the following command: ``` ng test --no-watch --no-progress ``` More information on testing --------------------------- After you've set up your application for testing, you might find the following testing guides useful. | | Details | | --- | --- | | [Code coverage](testing-code-coverage) | How much of your app your tests are covering and how to specify required amounts. | | [Testing services](testing-services) | How to test the services your application uses. | | [Basics of testing components](testing-components-basics) | Basics of testing Angular components. | | [Component testing scenarios](testing-components-scenarios) | Various kinds of component testing scenarios and use cases. | | [Testing attribute directives](testing-attribute-directives) | How to test your attribute directives. | | [Testing pipes](testing-pipes) | How to test pipes. | | [Debugging tests](test-debugging) | Common testing bugs. | | [Testing utility APIs](testing-utility-apis) | Angular testing features. | Last reviewed on Tue Jan 17 2023
angular Keeping your Angular projects up-to-date Keeping your Angular projects up-to-date ======================================== Just like the Web and the entire web ecosystem, Angular is continuously improving. Angular balances continuous improvement with a strong focus on stability and making updates straightforward. Keeping your Angular application up-to-date enables you to take advantage of leading-edge new features, as well as optimizations and bug fixes. This document contains information and resources to help you keep your Angular applications and libraries up-to-date. For information about our versioning policy and practices —including support and deprecation practices, as well as the release schedule— see [Angular versioning and releases](releases "Angular versioning and releases"). > If you are currently using AngularJS, see [Upgrading from AngularJS](upgrade "Upgrading from Angular JS"). *AngularJS* is the name for all v1.x versions of Angular. > > Getting notified of new releases -------------------------------- To be notified when new releases are available, follow [@angular](https://twitter.com/angular "@angular on Twitter") on Twitter or subscribe to the [Angular blog](https://blog.angular.io "Angular blog"). Learning about new features --------------------------- What's new? What's changed? We share the most important things you need to know on the Angular blog in [release announcements](https://blog.angular.io/tagged/release%20notes "Angular blog - release announcements"). To review a complete list of changes, organized by version, see the [Angular change log](https://github.com/angular/angular/blob/main/CHANGELOG.md "Angular change log"). Checking your version of Angular -------------------------------- To check your application's version of Angular: From within your project directory, use the `ng version` command. Finding the current version of Angular -------------------------------------- The most recent stable released version of Angular appears in the [Angular documentation](docs "Angular documentation") at the bottom of the left side navigation. For example, `stable (v13.0.3)`. You can also find the most current version of Angular by using the CLI command [`ng update`](cli/update). By default, [`ng update`](cli/update) (without additional arguments) lists the updates that are available to you. Updating your environment and apps ---------------------------------- To make updating uncomplicated, we provide complete instructions in the interactive [Angular Update Guide](https://update.angular.io/ "Angular Update Guide"). The Angular Update Guide provides customized update instructions, based on the current and target versions that you specify. It includes basic and advanced update paths, to match the complexity of your applications. It also includes troubleshooting information and any recommended manual changes to help you get the most out of the new release. For simple updates, the CLI command [`ng update`](cli/update) is all you need. Without additional arguments, [`ng update`](cli/update) lists the updates that are available to you and provides recommended steps to update your application to the most current version. [Angular Versioning and Releases](releases#versioning "Angular Release Practices, Versioning") describes the level of change that you can expect based on a release's version number. It also describes supported update paths. 
Resource summary ---------------- * Release announcements: [Angular blog - release announcements](https://blog.angular.io/tagged/release%20notes "Angular blog announcements about recent releases") * Release announcements (older): [Angular blog - announcements about releases prior to August 2017](https://blog.angularjs.org/search?q=available&by-date=true "Angular blog announcements about releases prior to August 2017") * Release details: [Angular change log](https://github.com/angular/angular/blob/main/CHANGELOG.md "Angular change log") * Update instructions: [Angular Update Guide](https://update.angular.io/ "Angular Update Guide") * Update command reference: [Angular CLI `ng update` command reference](cli/update) * Versioning, release, support, and deprecation practices: [Angular versioning and releases](releases "Angular versioning and releases") Last reviewed on Mon Feb 28 2022 angular Angular service worker introduction Angular service worker introduction =================================== Service workers augment the traditional web deployment model and empower applications to deliver a user experience with the reliability and performance on par with code that is written to run on your operating system and hardware. Adding a service worker to an Angular application is one of the steps for turning an application into a [Progressive Web App](https://web.dev/progressive-web-apps/) (also known as a PWA). At its simplest, a service worker is a script that runs in the web browser and manages caching for an application. Service workers function as a network proxy. They intercept all outgoing HTTP requests made by the application and can choose how to respond to them. For example, they can query a local cache and deliver a cached response if one is available. Proxying isn't limited to requests made through programmatic APIs, such as `fetch`; it also includes resources referenced in HTML and even the initial request to `index.html`. Service worker-based caching is thus completely programmable and doesn't rely on server-specified caching headers. Unlike the other scripts that make up an application, such as the Angular application bundle, the service worker is preserved after the user closes the tab. The next time that browser loads the application, the service worker loads first, and can intercept every request for resources to load the application. If the service worker is designed to do so, it can *completely satisfy the loading of the application, without the need for the network*. Even across a fast reliable network, round-trip delays can introduce significant latency when loading the application. Using a service worker to reduce dependency on the network can significantly improve the user experience. Service workers in Angular -------------------------- Angular applications, as single-page applications, are in a prime position to benefit from the advantages of service workers. Starting with version 5.0.0, Angular ships with a service worker implementation. Angular developers can take advantage of this service worker and benefit from the increased reliability and performance it provides, without needing to code against low-level APIs. Angular's service worker is designed to optimize the end user experience of using an application over a slow or unreliable network connection, while also minimizing the risks of serving outdated content. To achieve this, the Angular service worker follows these guidelines: * Caching an application is like installing a native application. 
The application is cached as one unit, and all files update together. * A running application continues to run with the same version of all files. It does not suddenly start receiving cached files from a newer version, which are likely incompatible. * When users refresh the application, they see the latest fully cached version. New tabs load the latest cached code. * Updates happen in the background, relatively quickly after changes are published. The previous version of the application is served until an update is installed and ready. * The service worker conserves bandwidth when possible. Resources are only downloaded if they've changed. To support these behaviors, the Angular service worker loads a *manifest* file from the server. The file, called `ngsw.json` (not to be confused with the [web app manifest](https://developer.mozilla.org/docs/Web/Manifest)), describes the resources to cache and includes hashes of every file's contents. When an update to the application is deployed, the contents of the manifest change, informing the service worker that a new version of the application should be downloaded and cached. This manifest is generated from a CLI-generated configuration file called `ngsw-config.json`. Installing the Angular service worker is as straightforward as including an `[NgModule](../api/core/ngmodule)`. In addition to registering the Angular service worker with the browser, this also makes a few services available for injection which interact with the service worker and can be used to control it. For example, an application can ask to be notified when a new update becomes available, or an application can ask the service worker to check the server for available updates. Prerequisites ------------- To make use of all the features of Angular service workers, use the latest versions of Angular and the Angular CLI. For service workers to be registered, the application must be accessed over HTTPS, not HTTP. Browsers ignore service workers on pages that are served over an insecure connection. The reason is that service workers are quite powerful, so extra care is needed to ensure the service worker script has not been tampered with. There is one exception to this rule: to make local development more straightforward, browsers do *not* require a secure connection when accessing an application on `localhost`. ### Browser support To benefit from the Angular service worker, your application must run in a web browser that supports service workers in general. Currently, service workers are supported in the latest versions of Chrome, Firefox, Edge, Safari, Opera, UC Browser (Android version) and Samsung Internet. Browsers like IE and Opera Mini do not support service workers. If the user is accessing your application with a browser that does not support service workers, the service worker is not registered and related behavior such as offline cache management and push notifications does not happen. More specifically: * The browser does not download the service worker script and the `ngsw.json` manifest file * Active attempts to interact with the service worker, such as calling `[SwUpdate.checkForUpdate()](../api/service-worker/swupdate#checkForUpdate)`, return rejected promises * The observable events of related services, such as `[SwUpdate.available](../api/service-worker/swupdate#available)`, are not triggered It is highly recommended that you ensure that your application works even without service worker support in the browser. 
Although an unsupported browser ignores service worker caching, it still reports errors if the application attempts to interact with the service worker. For example, calling `[SwUpdate.checkForUpdate()](../api/service-worker/swupdate#checkForUpdate)` returns rejected promises. To avoid such an error, check whether the Angular service worker is enabled using `[SwUpdate.isEnabled](../api/service-worker/swupdate#isEnabled)`. To learn more about other browsers that are service worker ready, see the [Can I Use](https://caniuse.com/#feat=serviceworkers) page and [MDN docs](https://developer.mozilla.org/docs/Web/API/Service_Worker_API). Related resources ----------------- The rest of the articles in this section specifically address the Angular implementation of service workers. * [App Shell](app-shell) * [Service Worker Communication](service-worker-communications) * [Service Worker Notifications](service-worker-notifications) * [Service Worker in Production](service-worker-devops) * [Service Worker Configuration](service-worker-config) For more information about service workers in general, see [Service Workers: an Introduction](https://developers.google.com/web/fundamentals/primers/service-workers). For more information about browser support, see the [browser support](https://developers.google.com/web/fundamentals/primers/service-workers/#browser_support) section of [Service Workers: an Introduction](https://developers.google.com/web/fundamentals/primers/service-workers), Jake Archibald's [Is Serviceworker ready?](https://jakearchibald.github.io/isserviceworkerready), and [Can I Use](https://caniuse.com/serviceworkers). For additional recommendations and examples, see: * [Precaching with Angular Service Worker](https://web.dev/precaching-with-the-angular-service-worker) * [Creating a PWA with Angular CLI](https://web.dev/creating-pwa-with-angular-cli) Next steps ---------- To begin using Angular service workers, see [Getting Started with service workers](service-worker-getting-started). Last reviewed on Mon Feb 28 2022 angular Common Internationalization tasks Common Internationalization tasks ================================= Use the following Angular tasks to internationalize your project. * Use built-in pipes to display dates, numbers, percentages, and currencies in a local format. * Mark text in component templates for translation. * Mark plural forms of expressions for translation. * Mark alternate text for translation. After you prepare your project for an international audience, use the [Angular CLI](cli "CLI Overview and Command Reference | Angular") to localize your project. Complete the following tasks to localize your project. * Use the CLI to extract marked text to a *source language* file. * Make a copy of the *source language* file for each language, and send all of *translation* files to a translator or service. * Use the CLI to merge the finished translation files when you build your project for one or more locales. > Create an adaptable user interface for all of your target locales that takes into consideration the differences in spacing for different languages. For more details, see [How to approach internationalization](https://marketfinder.thinkwithgoogle.com/intl/en_us/guide/how-to-approach-i18n#overview "Overview - How to approach internationalization | Market Finder | Think with Google"). > > Prerequisites ------------- To prepare your project for translations, you should have a basic understanding of the following subjects. 
* [Templates](glossary#template "template - Glossary | Angular") * [Components](glossary#component "component - Glossary | Angular") * [Angular CLI](cli "CLI Overview and Command Reference | Angular") [command-line](glossary#command-line-interface-cli "command-line interface (CLI) - Glossary | Angular") tool for managing the Angular development cycle * [Extensible Markup Language (XML)](https://www.w3.org/XML "Extensible Markup Language (XML) | W3C") used for translation files Learn about common Angular internationalization tasks ----------------------------------------------------- [Add the localize package Learn how to add the Angular Localize package to your project Add the localize package](i18n-common-add-package "Add the localize package") [Refer to locales by ID Learn how to identify and specify a locale identifier for your project Refer to locales by ID](i18n-common-locale-id "Refer to locales by ID") [Format data based on locale Learn how to implement localized data pipes and override the locale for your project Format data based on locale](i18n-common-format-data-locale "Format data based on locale") [Prepare component for translation Learn how to specify source text for translation Prepare component for translation](i18n-common-prepare "Prepare component for translation") [Work with translation files Learn how to review and process translation text Work with translation files](i18n-common-translation-files "Work with translation files") [Merge translations into the application Learn how to merge translations and build your translated application Merge translations into the application](i18n-common-merge "Merge translations into the application") [Deploy multiple locales Learn how to deploy multiple locales for your application Deploy multiple locales](i18n-common-deploy "Deploy multiple locales") Last reviewed on Thu Oct 07 2021 angular Entry components Entry components ================ > Entry components are deprecated, for more information, see [entryComponents deprecation](deprecations#entrycomponents-and-analyze_for_entry_components-no-longer-required) in the [Deprecated APIs and features](deprecations). > > An entry component is any component that Angular loads imperatively, (which means you're not referencing it in the template), by type. You specify an entry component by bootstrapping it in an NgModule, or including it in a routing definition. > To contrast the two types of components, there are components which are included in the template, which are declarative. Additionally, there are components which you load imperatively; that is, entry components. > > There are two main kinds of entry components: * The bootstrapped root component * A component you specify in a route definition A bootstrapped entry component ------------------------------ The following is an example of specifying a bootstrapped component, `AppComponent`, in a basic `app.module.ts`: ``` @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, FormsModule, HttpClientModule, AppRoutingModule ], providers: [], bootstrap: [AppComponent] // bootstrapped entry component }) ``` A bootstrapped component is an entry component that Angular loads into the DOM during the bootstrap process (application launch). Other entry components are loaded dynamically by other means, such as with the router. Angular loads a root `AppComponent` dynamically because it's listed by type in `@[NgModule.bootstrap](../api/core/ngmodule#bootstrap)`. 
> A component can also be bootstrapped imperatively in the module's `ngDoBootstrap()` method. The `@[NgModule.bootstrap](../api/core/ngmodule#bootstrap)` property tells the compiler that this is an entry component and it should generate code to bootstrap the application with this component. > > A bootstrapped component is necessarily an entry component because bootstrapping is an imperative process, thus it needs to have an entry component. A routed entry component ------------------------ The second kind of entry component occurs in a route definition like this: ``` const routes: Routes = [ { path: '', component: CustomerListComponent } ]; ``` A route definition refers to a component by its type with `component: CustomerListComponent`. All router components must be entry components. Because this would require you to add the component in two places (router and `entryComponents`) the Compiler is smart enough to recognize that this is a router definition and automatically add the router component into `entryComponents`. The `entryComponents` array --------------------------- > Since 9.0.0 with Ivy, the `entryComponents` property is no longer necessary. See [deprecations guide](deprecations#entryComponents). > > Though the `@[NgModule](../api/core/ngmodule)` decorator has an `entryComponents` array, most of the time you won't have to explicitly set any entry components because Angular adds components listed in `@[NgModule.bootstrap](../api/core/ngmodule#bootstrap)` and those in route definitions to entry components automatically. Though these two mechanisms account for most entry components, if your application happens to bootstrap or dynamically load a component by type imperatively, you must add it to `entryComponents` explicitly. ### `entryComponents` and the compiler For production applications you want to load the smallest code possible. The code should contain only the classes that you actually need and exclude components that are never used. For this reason, the Angular compiler only generates code for components which are reachable from the `entryComponents`; This means that adding more references to `@[NgModule.declarations](../api/core/ngmodule#declarations)` does not imply that they will necessarily be included in the final bundle. In fact, many libraries declare and export components you'll never use. For example, a material design library will export all components because it doesn't know which ones you will use. However, it is unlikely that you will use them all. For the ones you don't reference, the tree shaker drops these components from the final code package. If a component isn't an *entry component* and isn't found in a template, the tree shaker will throw it away. So, it's best to add only the components that are truly entry components to help keep your app as trim as possible. More on Angular modules ----------------------- You may also be interested in the following: * [Types of NgModules](module-types) * [Lazy Loading Modules with the Angular Router](lazy-loading-ngmodules) * [Providers](providers) * [NgModules FAQ](ngmodule-faq) Last reviewed on Mon Feb 28 2022
angular Build and test a documentation update Build and test a documentation update ===================================== After you have completed your documentation update, you want to run the documentation's end-to-end tests on your local computer. These tests are some of the tests that are run after you open a pull request. You can find end-to-end test failures faster when you run them on your local computer than after you open a pull request. Build the documentation on your local computer ---------------------------------------------- Before you test your updated documentation, you want to build it to make sure you test your latest changes. #### To build the documentation on your local computer Perform these steps from a command-line tool on your local computer or in the **terminal** pane of your IDE. 1. Navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). 2. From your working directory, run this command to navigate to the `aio` directory. The `aio` directory contains Angular's documentation files and tools. ``` cd aio ``` 3. Run this command to build the documentation locally. ``` yarn build ``` This builds the documentation from scratch. After you build the documentation on your local computer, you can run the angular.io end-to-end test. Run the angular.io end-to-end test on your local computer --------------------------------------------------------- This procedure runs most, but not all, of the tests that are run after you open a pull request. #### To run the angular.io end-to-end test on your local computer On your local computer, in a command line tool or the **Terminal** pane of your IDE: 1. Run this command from your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory to navigate to your [working directory](doc-prepare-to-edit#doc-working-directory). ``` cd personal/angular ``` 2. Replace `working-branch` with the name of your `working` branch and run this command to check out your `working` branch. ``` git checkout working-branch ``` 3. Run this command to navigate to the documentation. ``` cd aio ``` 4. Run these commands to run the end-to-end tests. ``` yarn e2e yarn docs-test ``` 5. Watch for errors that the test might report. No errors reported ------------------ If the end-to-end tests report no errors and your update has passed [all other reviews](doc-editing#test-your-documentation) required, your documentation update is ready for a pull request. After you open your pull request, GitHub tests the code in your pull request. The tests that GitHub runs include the end-to-end tests that you just ran and other tests that only run in the GitHub repo. Because of that, even though your update passed the end-to-end tests locally, your update could still report an error after you open a pull request. Errors reported --------------- If the end-to-end tests report an error on your local computer, be sure to correct it before you open a pull request. If the update fails the end-to-end test locally, it is likely to also fail the tests that run after you open a pull request. Last reviewed on Wed Oct 12 2022 angular Next steps: tools and techniques Next steps: tools and techniques ================================ After you understand the basic Angular building blocks, you can learn more about the features and tools that can help you develop and deliver Angular applications. 
* Work through the [Tour of Heroes](tutorial) tutorial to get a feel for how to fit the basic building blocks together to create a well-designed application. * Check out the [Glossary](glossary) to understand Angular-specific terms and usage. * Use the documentation to learn about key features in more depth, according to your stage of development and areas of interest. Application architecture ------------------------ * The **Main Concepts** section located in the table of contents contains several topics that explain how to connect the application data in your [components](glossary#component) to your page-display [templates](glossary#template), to create a complete interactive application. * The [NgModules](ngmodules) guide provides in-depth information on the modular structure of an Angular application. * The [Routing and navigation](router) guide provides in-depth information on how to construct applications that allow a user to navigate to different [views](glossary#view) within your single-page application. * The [Dependency injection](dependency-injection) guide provides in-depth information on how to construct an application such that each component class can acquire the services and objects it needs to perform its function. Responsive programming ---------------------- The [template syntax](template-syntax) and related topics contain details about how to display your component data when and where you want it within a view, and how to collect input from users that you can respond to. Additional pages and sections describe some basic programming techniques for Angular applications. * [Lifecycle hooks](lifecycle-hooks): Tap into key moments in the lifetime of a component, from its creation to its destruction, by implementing the lifecycle hook interfaces. * [Observables and event processing](observables): How to use observables with components and services to publish and subscribe to messages of any type, such as user-interaction events and asynchronous operation results. * [Angular elements](elements): How to package components as *custom elements* using Web Components, a web standard for defining new HTML elements in a framework-agnostic way. * [Forms](forms-overview): Support complex data entry scenarios with HTML-based input validation. * [Animations](animations): Use Angular's animation library to animate component behavior without deep knowledge of animation techniques or CSS. Client-server interaction ------------------------- Angular provides a framework for single-page applications, where most of the logic and data resides on the client. Most applications still need to access a server using the `[HttpClient](../api/common/http/httpclient)` to access and save data. For some platforms and applications, you might also want to use the PWA (Progressive Web App) model to improve the user experience. * [HTTP](http): Communicate with a server to get data, save data, and invoke server-side actions with an HTTP client. * [Server-side rendering](universal): Angular Universal generates static application pages on the server through server-side rendering (SSR). This allows you to run your Angular application on the server in order to improve performance and show the first page quickly on mobile and low-powered devices, and also facilitate web crawlers. * [Service workers and PWA](service-worker-intro): Use a service worker to reduce dependency on the network and significantly improve the user experience. 
* [Web workers](web-worker): Learn how to run CPU-intensive computations in a background thread. Support for the development cycle --------------------------------- * [CLI Command Reference](cli): The Angular CLI is a command-line tool that you use to create projects, generate application and library code, and perform a variety of ongoing development tasks such as testing, bundling, and deployment. * [Compilation](aot-compiler): Angular provides just-in-time (JIT) compilation for the development environment, and ahead-of-time (AOT) compilation for the production environment. * [Testing platform](testing): Run unit tests on your application parts as they interact with the Angular framework. * [Deployment](deployment): Learn techniques for deploying your Angular application to a remote server. * [Security guidelines](security): Learn about Angular's built-in protections against common web-application vulnerabilities and attacks such as cross-site scripting attacks. * [Internationalization](i18n-overview "Angular Internationalization | Angular"): Make your application available in multiple languages with Angular's internationalization (i18n) tools. * [Accessibility](accessibility): Make your application accessible to all users. File structure, configuration, and dependencies ----------------------------------------------- * [Workspace and file structure](file-structure): Understand the structure of Angular workspace and project folders. * [Building and serving](build): Learn to define different build and proxy server configurations for your project, such as development, staging, and production. * [npm packages](npm-packages): The Angular Framework, Angular CLI, and components used by Angular applications are packaged as [npm](https://docs.npmjs.com) packages and distributed using the npm registry. The Angular CLI creates a default `package.json` file, which specifies a starter set of packages that work well together and jointly support many common application scenarios. * [TypeScript configuration](typescript-configuration): TypeScript is the primary language for Angular application development. * [Browser support](browser-support): Make your applications compatible across a wide range of browsers. Extending Angular ----------------- * [Angular libraries](libraries): Learn about using and creating re-usable libraries. * [Schematics](schematics): Learn about customizing and extending the CLI's generation capabilities. * [CLI builders](cli-builder): Learn about customizing and extending the CLI's ability to apply tools to perform complex tasks, such as building and testing applications. Last reviewed on Mon Feb 28 2022 angular Angular Routing Angular Routing =============== In a single-page app, you change what the user sees by showing or hiding portions of the display that correspond to particular components, rather than going out to the server to get a new page. As users perform application tasks, they need to move between the different [views](glossary#view "Definition of view") that you have defined. To handle the navigation from one [view](glossary#view) to the next, you use the Angular **`[Router](../api/router/router)`**. The **`[Router](../api/router/router)`** enables navigation by interpreting a browser URL as an instruction to change the view. To explore a sample app featuring the router's primary features, see the live example. 
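As a rough sketch of that idea (the feature components named here are hypothetical placeholders, not part of any sample in this guide), a routing module maps URL paths to the components that the router should display:

```
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

// Hypothetical feature components, used only to illustrate the mapping.
import { HeroesComponent } from './heroes/heroes.component';
import { SettingsComponent } from './settings/settings.component';

// Each route pairs a URL path with the component rendered in the <router-outlet>.
const routes: Routes = [
  { path: 'heroes', component: HeroesComponent },
  { path: 'settings', component: SettingsComponent },
  { path: '', redirectTo: '/heroes', pathMatch: 'full' },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```

The guides listed below cover this configuration, and the many options it supports, in depth.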
Prerequisites ------------- Before creating a route, you should be familiar with the following: * [Basics of components](architecture-components) * [Basics of templates](glossary#template) * An Angular app —you can generate a basic Angular application using the [Angular CLI](cli). Learn about Angular routing --------------------------- [Common routing tasks Learn how to implement many of the common tasks associated with Angular routing. Common routing tasks](router "Common routing tasks") [Single-page applications (SPAs) routing tutorial A tutorial that covers patterns associated with Angular routing. Routing SPA tutorial](router-tutorial "Routing SPA tutorial") [Tour of Heroes expanded routing tutorial Add more routing features to the Tour of Heroes tutorial. Routing Tour of Heroes](router-tutorial-toh "Routing Tour of Heroes") [Creating custom route matches tutorial A tutorial that covers how to use custom matching strategy patterns with Angular routing. Custom route matches tutorial](routing-with-urlmatcher "Creating custom route matches tutorial") [Router reference Describes some core router API concepts. Router reference](router-reference "Router reference") Last reviewed on Mon Feb 28 2022 angular Two-way binding Two-way binding =============== Two-way binding gives components in your application a way to share data. Use two-way binding to listen for events and update values simultaneously between parent and child components. > See the live example for a working example containing the code snippets in this guide. > > Prerequisites ------------- To get the most out of two-way binding, you should have a basic understanding of the following concepts: * [Property binding](property-binding) * [Event binding](event-binding) * [Inputs and Outputs](inputs-outputs) Two-way binding combines property binding with event binding: | Bindings | Details | | --- | --- | | [Property binding](property-binding) | Sets a specific element property. | | [Event binding](event-binding) | Listens for an element change event. | Adding two-way data binding --------------------------- Angular's two-way binding syntax is a combination of square brackets and parentheses, `[()]`. The `[()]` syntax combines the brackets of property binding, `[]`, with the parentheses of event binding, `()`, as follows. ``` <app-sizer [(size)]="fontSizePx"></app-sizer> ``` How two-way binding works ------------------------- For two-way data binding to work, the `@[Output](../api/core/output)()` property must use the pattern, `inputChange`, where `input` is the name of the `@[Input](../api/core/input)()` property. For example, if the `@[Input](../api/core/input)()` property is `size`, the `@[Output](../api/core/output)()` property must be `sizeChange`. The following `sizerComponent` has a `size` value property and a `sizeChange` event. The `size` property is an `@[Input](../api/core/input)()`, so data can flow into the `sizerComponent`. The `sizeChange` event is an `@[Output](../api/core/output)()`, which lets data flow out of the `sizerComponent` to the parent component. Next, there are two methods, `dec()` to decrease the font size and `inc()` to increase the font size. These two methods use `resize()` to change the value of the `size` property within min/max value constraints, and to emit an event that conveys the new `size` value. 
``` export class SizerComponent { @Input() size!: number | string; @Output() sizeChange = new EventEmitter<number>(); dec() { this.resize(-1); } inc() { this.resize(+1); } resize(delta: number) { this.size = Math.min(40, Math.max(8, +this.size + delta)); this.sizeChange.emit(this.size); } } ``` The `sizerComponent` template has two buttons that each bind the click event to the `inc()` and `dec()` methods. When the user clicks one of the buttons, the `sizerComponent` calls the corresponding method. Both methods, `inc()` and `dec()`, call the `resize()` method with a `+1` or `-1`, which in turn raises the `sizeChange` event with the new size value. ``` <div> <button type="button" (click)="dec()" title="smaller">-</button> <button type="button" (click)="inc()" title="bigger">+</button> <span [style.font-size.px]="size">FontSize: {{size}}px</span> </div> ``` In the `AppComponent` template, `fontSizePx` is two-way bound to the `SizerComponent`. ``` <app-sizer [(size)]="fontSizePx"></app-sizer> <div [style.font-size.px]="fontSizePx">Resizable Text</div> ``` In the `AppComponent`, `fontSizePx` establishes the initial `SizerComponent.size` value by setting the value to `16`. ``` fontSizePx = 16; ``` Clicking the buttons updates the `AppComponent.fontSizePx`. The revised `AppComponent.fontSizePx` value updates the style binding, which makes the displayed text bigger or smaller. The two-way binding syntax is shorthand for a combination of property binding and event binding. The `SizerComponent` binding as separate property binding and event binding is as follows. ``` <app-sizer [size]="fontSizePx" (sizeChange)="fontSizePx=$event"></app-sizer> ``` The `$event` variable contains the data of the `SizerComponent.sizeChange` event. Angular assigns the `$event` value to the `AppComponent.fontSizePx` when the user clicks the buttons. Because no built-in HTML element follows the `x` value and `xChange` event pattern, two-way binding with form elements requires `[NgModel](../api/forms/ngmodel)`. For more information on how to use two-way binding in forms, see Angular [NgModel](built-in-directives#ngModel). Last reviewed on Mon Feb 28 2022 angular Complex animation sequences Complex animation sequences =========================== Prerequisites ------------- A basic understanding of the following concepts: * [Introduction to Angular animations](animations) * [Transition and triggers](transition-and-triggers) So far, we've learned simple animations of single HTML elements. Angular also lets you animate coordinated sequences, such as an entire grid or list of elements as they enter and leave a page. You can choose to run multiple animations in parallel, or run discrete animations sequentially, one following another. The functions that control complex animation sequences are: | Functions | Details | | --- | --- | | `[query](../api/animations/query)()` | Finds one or more inner HTML elements. | | `[stagger](../api/animations/stagger)()` | Applies a cascading delay to animations for multiple elements. | | [`group()`](../api/animations/group) | Runs multiple animation steps in parallel. | | `[sequence](../api/animations/sequence)()` | Runs animation steps one after another. 
| The query() function -------------------- Most complex animations rely on the `[query](../api/animations/query)()` function to find child elements and apply animations to them. Basic examples include: | Examples | Details | | --- | --- | | `[query](../api/animations/query)()` followed by `[animate](../api/animations/animate)()` | Used to query simple HTML elements and directly apply animations to them. | | `[query](../api/animations/query)()` followed by `[animateChild](../api/animations/animatechild)()` | Used to query child elements that have their own animation metadata and to trigger those animations (which would otherwise be blocked by the current/parent element's animation). | The first argument of `[query](../api/animations/query)()` is a [css selector](https://developer.mozilla.org/docs/Web/CSS/CSS_Selectors) string which can also contain the following Angular-specific tokens: | Tokens | Details | | --- | --- | | `:enter` `:leave` | For entering/leaving elements. | | `:animating` | For elements currently animating. | | `@*` `@triggerName` | For elements with any—or a specific—trigger. | | `:self` | The animating element itself. | Not all child elements are actually considered as entering/leaving; this can, at times, be counterintuitive and confusing. Please see the [query api docs](../api/animations/query#entering-and-leaving-elements) for more information. You can also see an illustration of this in the animations live example (introduced in the animations [introduction section](animations#about-this-guide)) under the Querying tab. Animate multiple elements using query() and stagger() functions --------------------------------------------------------------- After you have queried child elements via `[query](../api/animations/query)()`, the `[stagger](../api/animations/stagger)()` function lets you define a timing gap between each queried item, animating the elements with a delay between them. The following example demonstrates how to use the `[query](../api/animations/query)()` and `[stagger](../api/animations/stagger)()` functions to animate a list (of heroes), adding each in sequence, with a slight delay, from top to bottom. * Use `[query](../api/animations/query)()` to look for an element entering the page that meets certain criteria * For each of these elements, use `[style](../api/animations/style)()` to set the same initial style for the element. Make it transparent and use `transform` to move it out of position so that it can slide into place. * Use `[stagger](../api/animations/stagger)()` to delay each animation by 30 milliseconds * Animate each element on screen for 0.5 seconds using a custom-defined easing curve, simultaneously fading it in and un-transforming it ``` animations: [ trigger('pageAnimations', [ transition(':enter', [ query('.hero', [ style({opacity: 0, transform: 'translateY(-100px)'}), stagger(30, [ animate('500ms cubic-bezier(0.35, 0, 0.25, 1)', style({ opacity: 1, transform: 'none' })) ]) ]) ]) ]), ``` Parallel animation using group() function ----------------------------------------- You've seen how to add a delay between each successive animation. But you might also want to configure animations that happen in parallel. For example, you might want to animate two CSS properties of the same element but use a different `easing` function for each one. For this, you can use the animation [`group()`](../api/animations/group) function.
> **NOTE**: The [`group()`](../api/animations/group) function is used to group animation *steps*, rather than animated elements. > > The following example uses [`group()`](../api/animations/group)s on both `:enter` and `:leave` for two different timing configurations, thus applying two independent animations to the same element in parallel. ``` animations: [ trigger('flyInOut', [ state('in', style({ width: '*', transform: 'translateX(0)', opacity: 1 })), transition(':enter', [ style({ width: 10, transform: 'translateX(50px)', opacity: 0 }), group([ animate('0.3s 0.1s ease', style({ transform: 'translateX(0)', width: '*' })), animate('0.3s ease', style({ opacity: 1 })) ]) ]), transition(':leave', [ group([ animate('0.3s ease', style({ transform: 'translateX(50px)', width: 10 })), animate('0.3s 0.2s ease', style({ opacity: 0 })) ]) ]) ]) ] ``` Sequential vs. parallel animations ---------------------------------- Complex animations can have many things happening at once. But what if you want to create an animation involving several animations happening one after the other? Earlier you used [`group()`](../api/animations/group) to run multiple animations all at the same time, in parallel. A second function called `[sequence](../api/animations/sequence)()` lets you run those same animations one after the other. Within `[sequence](../api/animations/sequence)()`, the animation steps consist of either `[style](../api/animations/style)()` or `[animate](../api/animations/animate)()` function calls. * Use `[style](../api/animations/style)()` to apply the provided styling data immediately. * Use `[animate](../api/animations/animate)()` to apply styling data over a given time interval. Filter animation example ------------------------ Take a look at another animation on the live example page. Under the Filter/Stagger tab, enter some text into the **Search Heroes** text box, such as `Magnet` or `tornado`. The filter works in real time as you type. Elements leave the page as you type each new letter and the filter gets progressively stricter. The heroes list gradually re-enters the page as you delete each letter in the filter box. The HTML template contains a trigger called `filterAnimation`. ``` <label for="search">Search heroes: </label> <input type="text" id="search" #criteria (input)="updateCriteria(criteria.value)" placeholder="Search heroes"> <ul class="heroes" [@filterAnimation]="heroesTotal"> <li *ngFor="let hero of heroes" class="hero"> <div class="inner"> <span class="badge">{{ hero.id }}</span> <span class="name">{{ hero.name }}</span> </div> </li> </ul> ``` The `filterAnimation` in the component's decorator contains three transitions. ``` @Component({ animations: [ trigger('filterAnimation', [ transition(':enter, * => 0, * => -1', []), transition(':increment', [ query(':enter', [ style({ opacity: 0, width: 0 }), stagger(50, [ animate('300ms ease-out', style({ opacity: 1, width: '*' })), ]), ], { optional: true }) ]), transition(':decrement', [ query(':leave', [ stagger(50, [ animate('300ms ease-out', style({ opacity: 0, width: 0 })), ]), ]) ]), ]), ] }) export class HeroListPageComponent implements OnInit { heroesTotal = -1; get heroes() { return this._heroes; } private _heroes: Hero[] = []; ngOnInit() { this._heroes = HEROES; } updateCriteria(criteria: string) { criteria = criteria ? 
criteria.trim() : ''; this._heroes = HEROES.filter(hero => hero.name.toLowerCase().includes(criteria.toLowerCase())); const newTotal = this.heroes.length; if (this.heroesTotal !== newTotal) { this.heroesTotal = newTotal; } else if (!criteria) { this.heroesTotal = -1; } } } ``` The code in this example performs the following tasks: * Skips animations when the user first opens or navigates to this page (the filter animation narrows what is already there, so it only works on elements that already exist in the DOM) * Filters heroes based on the search input's value For each change: * Hides an element leaving the DOM by setting its opacity and width to 0 * Animates an element entering the DOM over 300 milliseconds. During the animation, the element assumes its default width and opacity. * If there are multiple elements entering or leaving the DOM, staggers each animation starting at the top of the page, with a 50-millisecond delay between each element Animating the items of a reordering list ---------------------------------------- Although Angular correctly animates `*[ngFor](../api/common/ngfor)` list items out of the box, it will not be able to do so if their ordering changes. This is because it will lose track of which element is which, resulting in broken animations. The only way to help Angular keep track of such elements is by assigning a `[TrackByFunction](../api/core/trackbyfunction)` to the `[NgForOf](../api/common/ngforof)` directive. This makes sure that Angular always knows which element is which, thus allowing it to apply the correct animations to the correct elements all the time. > **IMPORTANT**: If you need to animate the items of an `*[ngFor](../api/common/ngfor)` list and there is a possibility that the order of such items will change during runtime, always use a `[TrackByFunction](../api/core/trackbyfunction)`. > > Animations and Component View Encapsulation ------------------------------------------- Angular animations are based on the component's DOM structure and do not directly take [View Encapsulation](view-encapsulation) into account. This means that components using `[ViewEncapsulation.Emulated](../api/core/viewencapsulation#Emulated)` behave exactly as if they were using `[ViewEncapsulation.None](../api/core/viewencapsulation#None)` (`[ViewEncapsulation.ShadowDom](../api/core/viewencapsulation#ShadowDom)` behaves differently as we'll discuss shortly). For example, if the `[query](../api/animations/query)()` function (which you'll see more of in the rest of the Animations guide) were applied at the top of a tree of components using the emulated view encapsulation, such a query would be able to identify (and thus animate) DOM elements at any depth of the tree. On the other hand, `[ViewEncapsulation.ShadowDom](../api/core/viewencapsulation#ShadowDom)` changes the component's DOM structure by "hiding" DOM elements inside [`ShadowRoot`](https://developer.mozilla.org/en-US/docs/Web/API/ShadowRoot) elements. Such DOM manipulation prevents some of the animations implementation from working properly, since it relies on simple DOM structures and doesn't take `ShadowRoot` elements into account. Therefore, it is advised to avoid applying animations to views that incorporate components using the ShadowDom view encapsulation. Animation sequence summary -------------------------- Angular functions for animating multiple elements start with `[query](../api/animations/query)()` to find inner elements; for example, gathering all images within a `<div>`.
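For instance, a minimal sketch along these lines (the trigger name, the `img` selector, and the timings are illustrative, not taken from this guide's examples) queries every image inside the element the trigger is attached to and staggers a fade-in as that element enters:

```
import { animate, query, stagger, style, transition, trigger } from '@angular/animations';

// Sketch only: query() gathers each <img> inside the host element,
// then stagger() fades the images in one after another.
export const imagesEnter = trigger('imagesEnter', [
  transition(':enter', [
    query('img', [
      style({ opacity: 0 }),
      stagger(100, [
        animate('300ms ease-in', style({ opacity: 1 }))
      ])
    ], { optional: true }) // optional avoids an error when no images are found
  ])
]);
```

In a component, such a trigger would be listed in the `animations` metadata and attached to the container element (for example, `<div @imagesEnter>`) so the query has a scope to search within.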
The remaining functions, `[stagger](../api/animations/stagger)()`, [`group()`](../api/animations/group), and `[sequence](../api/animations/sequence)()`, apply cascades or let you control how multiple animation steps are applied. More on Angular animations -------------------------- You might also be interested in the following: * [Introduction to Angular animations](animations) * [Transition and triggers](transition-and-triggers) * [Reusable animations](reusable-animations) * [Route transition animations](route-animations) Last reviewed on Mon Feb 28 2022
angular Reactive forms Reactive forms ============== Reactive forms provide a model-driven approach to handling form inputs whose values change over time. This guide shows you how to create and update a basic form control, progress to using multiple controls in a group, validate form values, and create dynamic forms where you can add or remove controls at run time. > Try this Reactive Forms live-example. > > Prerequisites ------------- Before going further into reactive forms, you should have a basic understanding of the following: * [TypeScript](https://www.typescriptlang.org/ "The TypeScript language") programming * Angular application-design fundamentals, as described in [Angular Concepts](architecture "Introduction to Angular concepts") * The form-design concepts that are presented in [Introduction to Forms](forms-overview "Overview of Angular forms") Overview of reactive forms -------------------------- Reactive forms use an explicit and immutable approach to managing the state of a form at a given point in time. Each change to the form state returns a new state, which maintains the integrity of the model between changes. Reactive forms are built around [observable](glossary#observable "Observable definition") streams, where form inputs and values are provided as streams of input values, which can be accessed synchronously. Reactive forms also provide a straightforward path to testing because you are assured that your data is consistent and predictable when requested. Any consumers of the streams have access to manipulate that data safely. Reactive forms differ from [template-driven forms](forms "Template-driven forms guide") in distinct ways. Reactive forms provide synchronous access to the data model, immutability with observable operators, and change tracking through observable streams. Template-driven forms let direct access modify data in your template, but are less explicit than reactive forms because they rely on directives embedded in the template, along with mutable data to track changes asynchronously. See the [Forms Overview](forms-overview "Overview of Angular forms") for detailed comparisons between the two paradigms. Adding a basic form control --------------------------- There are three steps to using form controls. 1. Register the reactive forms module in your application. This module declares the reactive-form directives that you need to use reactive forms. 2. Generate a new component and instantiate a new `[FormControl](../api/forms/formcontrol)`. 3. Register the `[FormControl](../api/forms/formcontrol)` in the template. You can then display the form by adding the component to the template. The following examples show how to add a single form control. In the example, the user enters their name into an input field, captures that input value, and displays the current value of the form control element. | Action | Details | | --- | --- | | Register the reactive forms module | To use reactive form controls, import `[ReactiveFormsModule](../api/forms/reactiveformsmodule)` from the `@angular/forms` package and add it to your NgModule's `imports` array. ``` import { ReactiveFormsModule } from '@angular/forms'; @NgModule({ imports: [ // other imports ... ReactiveFormsModule ], }) export class AppModule { } ``` | | Generate a new `[FormControl](../api/forms/formcontrol)` | Use the [CLI command](cli/generate#component-command "Using the Angular command-line interface") `ng generate` to generate a component in your project to host the control. 
``` import { Component } from '@angular/core'; import { FormControl } from '@angular/forms'; @Component({ selector: 'app-name-editor', templateUrl: './name-editor.component.html', styleUrls: ['./name-editor.component.css'] }) export class NameEditorComponent { name = new FormControl(''); } ``` Use the constructor of `[FormControl](../api/forms/formcontrol)` to set its initial value, which in this case is an empty string. By creating these controls in your component class, you get immediate access to listen for, update, and validate the state of the form input. | | Register the control in the template | After you create the control in the component class, you must associate it with a form control element in the template. Update the template with the form control using the `formControl` binding provided by `[FormControlDirective](../api/forms/formcontroldirective)`, which is also included in the `[ReactiveFormsModule](../api/forms/reactiveformsmodule)`. ``` <label for="name">Name: </label> <input id="name" type="text" [formControl]="name"> ``` * For a summary of the classes and directives provided by `[ReactiveFormsModule](../api/forms/reactiveformsmodule)`, see the following [Reactive forms API](reactive-forms#reactive-forms-api "API summary") section * For complete syntax details of these classes and directives, see the API reference documentation for the [Forms package](../api/forms "API reference") Using the template binding syntax, the form control is now registered to the `name` input element in the template. The form control and DOM element communicate with each other: the view reflects changes in the model, and the model reflects changes in the view. | | Display the component | The `[FormControl](../api/forms/formcontrol)` assigned to the `name` property is displayed when the property's host component is added to a template. ``` <app-name-editor></app-name-editor> ``` | ### Displaying a form control value You can display the value in the following ways. * Through the `valueChanges` observable where you can listen for changes in the form's value in the template using `[AsyncPipe](../api/common/asyncpipe)` or in the component class using the `subscribe()` method * With the `value` property, which gives you a snapshot of the current value The following example shows you how to display the current value using interpolation in the template. ``` <p>Value: {{ name.value }}</p> ``` The displayed value changes as you update the form control element. Reactive forms provide access to information about a given control through properties and methods provided with each instance. These properties and methods of the underlying [AbstractControl](../api/forms/abstractcontrol "API reference") class are used to control form state and determine when to display messages when handling [input validation](reactive-forms#basic-form-validation "Learn more about validating form input"). Read about other `[FormControl](../api/forms/formcontrol)` properties and methods in the [API Reference](../api/forms/formcontrol "Detailed syntax reference"). ### Replacing a form control value Reactive forms have methods to change a control's value programmatically, which gives you the flexibility to update the value without user interaction. A form control instance provides a `setValue()` method that updates the value of the form control and validates the structure of the value provided against the control's structure. 
For example, when retrieving form data from a backend API or service, use the `setValue()` method to update the control to its new value, replacing the old value entirely. The following example adds a method to the component class to update the value of the control to *Nancy* using the `setValue()` method. ``` updateName() { this.name.setValue('Nancy'); } ``` Update the template with a button to simulate a name update. When you click the **Update Name** button, the value entered in the form control element is reflected as its current value. ``` <button type="button" (click)="updateName()">Update Name</button> ``` The form model is the source of truth for the control, so when you click the button, the value of the input is changed within the component class, overriding its current value. > **NOTE**: In this example, you're using a single control. When using the `setValue()` method with a [form group](reactive-forms#grouping-form-controls "Learn more about form groups") or [form array](reactive-forms#creating-dynamic-forms "Learn more about dynamic forms") instance, the value needs to match the structure of the group or array. > > Grouping form controls ---------------------- Forms typically contain several related controls. Reactive forms provide two ways of grouping multiple related controls into a single input form. | Form groups | Details | | --- | --- | | Form group | Defines a form with a fixed set of controls that you can manage together. Form group basics are discussed in this section. You can also [nest form groups](reactive-forms#nested-groups "See more about nesting groups") to create more complex forms. | | Form array | Defines a dynamic form, where you can add and remove controls at run time. You can also nest form arrays to create more complex forms. For more about this option, see [Creating dynamic forms](reactive-forms#dynamic-forms "See more about form arrays"). | Just as a form control instance gives you control over a single input field, a form group instance tracks the form state of a group of form control instances (for example, a form). Each control in a form group instance is tracked by name when creating the form group. The following example shows how to manage multiple form control instances in a single group. Generate a `ProfileEditor` component and import the `[FormGroup](../api/forms/formgroup)` and `[FormControl](../api/forms/formcontrol)` classes from the `@angular/forms` package. ``` ng generate component ProfileEditor ``` ``` import { FormGroup, FormControl } from '@angular/forms'; ``` To add a form group to this component, take the following steps. 1. Create a `[FormGroup](../api/forms/formgroup)` instance. 2. Associate the `[FormGroup](../api/forms/formgroup)` model and view. 3. Save the form data. | Action | Details | | --- | --- | | Create a `[FormGroup](../api/forms/formgroup)` instance | Create a property in the component class named `profileForm` and set the property to a new form group instance. To initialize the form group, provide the constructor with an object of named keys mapped to their control. For the profile form, add two form control instances with the names `firstName` and `lastName`. 
``` import { Component } from '@angular/core'; import { FormGroup, FormControl } from '@angular/forms'; @Component({ selector: 'app-profile-editor', templateUrl: './profile-editor.component.html', styleUrls: ['./profile-editor.component.css'] }) export class ProfileEditorComponent { profileForm = new FormGroup({ firstName: new FormControl(''), lastName: new FormControl(''), }); } ``` The individual form controls are now collected within a group. A `[FormGroup](../api/forms/formgroup)` instance provides its model value as an object reduced from the values of each control in the group. A form group instance has the same properties (such as `value` and `untouched`) and methods (such as `setValue()`) as a form control instance. | | Associate the `[FormGroup](../api/forms/formgroup)` model and view | A form group tracks the status and changes for each of its controls, so if one of the controls changes, the parent control also emits a new status or value change. The model for the group is maintained from its members. After you define the model, you must update the template to reflect the model in the view. ``` <form [formGroup]="profileForm"> <label for="first-name">First Name: </label> <input id="first-name" type="text" formControlName="firstName"> <label for="last-name">Last Name: </label> <input id="last-name" type="text" formControlName="lastName"> </form> ``` **NOTE**: Just as a form group contains a group of controls, the *profileForm* `[FormGroup](../api/forms/formgroup)` is bound to the `form` element with the `[FormGroup](../api/forms/formgroup)` directive, creating a communication layer between the model and the form containing the inputs. The `[formControlName](../api/forms/formcontrolname)` input provided by the `[FormControlName](../api/forms/formcontrolname)` directive binds each individual input to the form control defined in `[FormGroup](../api/forms/formgroup)`. The form controls communicate with their respective elements. They also communicate changes to the form group instance, which provides the source of truth for the model value. | | Save form data | The `ProfileEditor` component accepts input from the user, but in a real scenario you want to capture the form value and make available for further processing outside the component. The `[FormGroup](../api/forms/formgroup)` directive listens for the `submit` event emitted by the `form` element and emits an `ngSubmit` event that you can bind to a callback function. Add an `ngSubmit` event listener to the `form` tag with the `onSubmit()` callback method. ``` <form [formGroup]="profileForm" (ngSubmit)="onSubmit()"> ``` The `onSubmit()` method in the `ProfileEditor` component captures the current value of `profileForm`. Use `[EventEmitter](../api/core/eventemitter)` to keep the form encapsulated and to provide the form value outside the component. The following example uses `console.warn` to log a message to the browser console. ``` onSubmit() { // TODO: Use EventEmitter with form value console.warn(this.profileForm.value); } ``` The `submit` event is emitted by the `form` tag using the built-in DOM event. You trigger the event by clicking a button with `submit` type. This lets the user press the **Enter** key to submit the completed form. Use a `button` element to add a button to the bottom of the form to trigger the form submission. 
``` <p>Complete the form to enable button.</p> <button type="submit" [disabled]="!profileForm.valid">Submit</button> ``` **NOTE**: The button in the preceding snippet also has a `disabled` binding attached to it to disable the button when `profileForm` is invalid. You aren't performing any validation yet, so the button is always enabled. Basic form validation is covered in the [Validating form input](reactive-forms#basic-form-validation "Basic form validation.") section. | | Display the component | To display the `ProfileEditor` component that contains the form, add it to a component template. ``` <app-profile-editor></app-profile-editor> ``` `ProfileEditor` lets you manage the form control instances for the `firstName` and `lastName` controls within the form group instance. | ### Creating nested form groups Form groups can accept both individual form control instances and other form group instances as children. This makes composing complex form models easier to maintain and logically group together. When building complex forms, managing the different areas of information is easier in smaller sections. Using a nested form group instance lets you break large forms groups into smaller, more manageable ones. To make more complex forms, use the following steps. 1. Create a nested group. 2. Group the nested form in the template. Some types of information naturally fall into the same group. A name and address are typical examples of such nested groups, and are used in the following examples. | Action | Details | | --- | --- | | Create a nested group | To create a nested group in `profileForm`, add a nested `address` element to the form group instance. ``` import { Component } from '@angular/core'; import { FormGroup, FormControl } from '@angular/forms'; @Component({ selector: 'app-profile-editor', templateUrl: './profile-editor.component.html', styleUrls: ['./profile-editor.component.css'] }) export class ProfileEditorComponent { profileForm = new FormGroup({ firstName: new FormControl(''), lastName: new FormControl(''), address: new FormGroup({ street: new FormControl(''), city: new FormControl(''), state: new FormControl(''), zip: new FormControl('') }) }); } ``` In this example, `address group` combines the current `firstName` and `lastName` controls with the new `street`, `city`, `state`, and `zip` controls. Even though the `address` element in the form group is a child of the overall `profileForm` element in the form group, the same rules apply with value and status changes. Changes in status and value from the nested form group propagate to the parent form group, maintaining consistency with the overall model. | | Group the nested form in the template | After you update the model in the component class, update the template to connect the form group instance and its input elements. Add the `address` form group containing the `street`, `city`, `state`, and `zip` fields to the `ProfileEditor` template. ``` <div formGroupName="address"> <h2>Address</h2> <label for="street">Street: </label> <input id="street" type="text" formControlName="street"> <label for="city">City: </label> <input id="city" type="text" formControlName="city"> <label for="state">State: </label> <input id="state" type="text" formControlName="state"> <label for="zip">Zip Code: </label> <input id="zip" type="text" formControlName="zip"> </div> ``` The `ProfileEditor` form is displayed as one group, but the model is broken down further to represent the logical grouping areas. 
**TIP**: Display the value for the form group instance in the component template using the `value` property and `[JsonPipe](../api/common/jsonpipe)`. | ### Updating parts of the data model When updating the value for a form group instance that contains multiple controls, you might only want to update parts of the model. This section covers how to update specific parts of a form control data model. There are two ways to update the model value: | Methods | Details | | --- | --- | | `setValue()` | Set a new value for an individual control. The `setValue()` method strictly adheres to the structure of the form group and replaces the entire value for the control. | | `patchValue()` | Replace any properties defined in the object that have changed in the form model. | The strict checks of the `setValue()` method help catch nesting errors in complex forms, while `patchValue()` fails silently on those errors. In `ProfileEditorComponent`, use the `updateProfile` method with the following example to update the first name and street address for the user. ``` updateProfile() { this.profileForm.patchValue({ firstName: 'Nancy', address: { street: '123 Drew Street' } }); } ``` Simulate an update by adding a button to the template to update the user profile on demand. ``` <button type="button" (click)="updateProfile()">Update Profile</button> ``` When a user clicks the button, the `profileForm` model is updated with new values for `firstName` and `street`. Notice that `street` is provided in an object inside the `address` property. This is necessary because the `patchValue()` method applies the update against the model structure. `PatchValue()` only updates properties that the form model defines. Using the FormBuilder service to generate controls -------------------------------------------------- Creating form control instances manually can become repetitive when dealing with multiple forms. The `[FormBuilder](../api/forms/formbuilder)` service provides convenient methods for generating controls. Use the following steps to take advantage of this service. 1. Import the `[FormBuilder](../api/forms/formbuilder)` class. 2. Inject the `[FormBuilder](../api/forms/formbuilder)` service. 3. Generate the form contents. The following examples show how to refactor the `ProfileEditor` component to use the form builder service to create form control and form group instances. | Action | Details | | --- | --- | | Import the FormBuilder class | Import the `[FormBuilder](../api/forms/formbuilder)` class from the `@angular/forms` package. ``` import { FormBuilder } from '@angular/forms'; ``` | | Inject the FormBuilder service | The `[FormBuilder](../api/forms/formbuilder)` service is an injectable provider that is provided with the reactive forms module. Inject this dependency by adding it to the component constructor. ``` constructor(private fb: FormBuilder) { } ``` | | Generate form controls | The `[FormBuilder](../api/forms/formbuilder)` service has three methods: `control()`, `group()`, and `array()`. These are factory methods for generating instances in your component classes including form controls, form groups, and form arrays. Use the `group` method to create the `profileForm` controls. 
``` import { Component } from '@angular/core'; import { FormBuilder } from '@angular/forms'; @Component({ selector: 'app-profile-editor', templateUrl: './profile-editor.component.html', styleUrls: ['./profile-editor.component.css'] }) export class ProfileEditorComponent { profileForm = this.fb.group({ firstName: [''], lastName: [''], address: this.fb.group({ street: [''], city: [''], state: [''], zip: [''] }), }); constructor(private fb: FormBuilder) { } } ``` In the preceding example, you use the `group()` method with the same object to define the properties in the model. The value for each control name is an array containing the initial value as the first item in the array. **TIP**: You can define the control with just the initial value, but if your controls need sync or async validation, add sync and async validators as the second and third items in the array. Compare using the form builder to creating the instances manually. ``` profileForm = new FormGroup({ firstName: new FormControl(''), lastName: new FormControl(''), address: new FormGroup({ street: new FormControl(''), city: new FormControl(''), state: new FormControl(''), zip: new FormControl('') }) }); ``` ``` profileForm = this.fb.group({ firstName: [''], lastName: [''], address: this.fb.group({ street: [''], city: [''], state: [''], zip: [''] }), }); ``` | Validating form input --------------------- *Form validation* is used to ensure that user input is complete and correct. This section covers adding a single validator to a form control and displaying the overall form status. Form validation is covered more extensively in the [Form Validation](form-validation "All about form validation") guide. Use the following steps to add form validation. 1. Import a validator function in your form component. 2. Add the validator to the field in the form. 3. Add logic to handle the validation status. The most common validation is making a field required. The following example shows how to add a required validation to the `firstName` control and display the result of validation. | Action | Details | | --- | --- | | Import a validator function | Reactive forms include a set of validator functions for common use cases. These functions receive a control to validate against and return an error object or a null value based on the validation check. Import the `[Validators](../api/forms/validators)` class from the `@angular/forms` package. ``` import { Validators } from '@angular/forms'; ``` | | Make a field required | In the `ProfileEditor` component, add the `Validators.required` static method as the second item in the array for the `firstName` control. ``` profileForm = this.fb.group({ firstName: ['', Validators.required], lastName: [''], address: this.fb.group({ street: [''], city: [''], state: [''], zip: [''] }), }); ``` | | Display form status | When you add a required field to the form control, its initial status is invalid. This invalid status propagates to the parent form group element, making its status invalid. Access the current status of the form group instance through its `status` property. Display the current status of `profileForm` using interpolation. ``` <p>Form Status: {{ profileForm.status }}</p> ``` The **Submit** button is disabled because `profileForm` is invalid due to the required `firstName` form control. After you fill out the `firstName` input, the form becomes valid and the **Submit** button is enabled. For more on form validation, visit the [Form Validation](form-validation "All about form validation") guide. 
| Creating dynamic forms ---------------------- `[FormArray](../api/forms/formarray)` is an alternative to `[FormGroup](../api/forms/formgroup)` for managing any number of unnamed controls. As with form group instances, you can dynamically insert and remove controls from form array instances, and the form array instance value and validation status is calculated from its child controls. However, you don't need to define a key for each control by name, so this is a great option if you don't know the number of child values in advance. To define a dynamic form, take the following steps. 1. Import the `[FormArray](../api/forms/formarray)` class. 2. Define a `[FormArray](../api/forms/formarray)` control. 3. Access the `[FormArray](../api/forms/formarray)` control with a getter method. 4. Display the form array in a template. The following example shows you how to manage an array of *aliases* in `ProfileEditor`. | Action | Details | | --- | --- | | Import the `[FormArray](../api/forms/formarray)` class | Import the `[FormArray](../api/forms/formarray)` class from `@angular/forms` to use for type information. The `[FormBuilder](../api/forms/formbuilder)` service is ready to create a `[FormArray](../api/forms/formarray)` instance. ``` import { FormArray } from '@angular/forms'; ``` | | Define a `[FormArray](../api/forms/formarray)` control | You can initialize a form array with any number of controls, from zero to many, by defining them in an array. Add an `aliases` property to the form group instance for `profileForm` to define the form array. Use the `[FormBuilder.array()](../api/forms/formbuilder#array)` method to define the array, and the `[FormBuilder.control()](../api/forms/formbuilder#control)` method to populate the array with an initial control. ``` profileForm = this.fb.group({ firstName: ['', Validators.required], lastName: [''], address: this.fb.group({ street: [''], city: [''], state: [''], zip: [''] }), aliases: this.fb.array([ this.fb.control('') ]) }); ``` The aliases control in the form group instance is now populated with a single control until more controls are added dynamically. | | Access the `[FormArray](../api/forms/formarray)` control | A getter provides access to the aliases in the form array instance compared to repeating the `profileForm.get()` method to get each instance. The form array instance represents an undefined number of controls in an array. It's convenient to access a control through a getter, and this approach is straightforward to repeat for additional controls. Use the getter syntax to create an `aliases` class property to retrieve the alias's form array control from the parent form group. ``` get aliases() { return this.profileForm.get('aliases') as FormArray; } ``` **NOTE**: Because the returned control is of the type `[AbstractControl](../api/forms/abstractcontrol)`, you need to provide an explicit type to access the method syntax for the form array instance. Define a method to dynamically insert an alias control into the alias's form array. The `[FormArray.push()](../api/forms/formarray#push)` method inserts the control as a new item in the array. ``` addAlias() { this.aliases.push(this.fb.control('')); } ``` In the template, each control is displayed as a separate input field. | | Display the form array in the template | To attach the aliases from your form model, you must add it to the template. 
Similar to the `[formGroupName](../api/forms/formgroupname)` input provided by `FormGroupNameDirective`, `[formArrayName](../api/forms/formarrayname)` binds communication from the form array instance to the template with `FormArrayNameDirective`. Add the following template HTML after the `<div>` closing the `[formGroupName](../api/forms/formgroupname)` element. ``` <div formArrayName="aliases"> <h2>Aliases</h2> <button type="button" (click)="addAlias()">+ Add another alias</button> <div *ngFor="let alias of aliases.controls; let i=index"> <!-- The repeated alias template --> <label for="alias-{{ i }}">Alias:</label> <input id="alias-{{ i }}" type="text" [formControlName]="i"> </div> </div> ``` The `*[ngFor](../api/common/ngfor)` directive iterates over each form control instance provided by the aliases form array instance. Because form array elements are unnamed, you assign the index to the `i` variable and pass it to each control to bind it to the `[formControlName](../api/forms/formcontrolname)` input. Each time a new alias instance is added, the new form array instance is provided its control based on the index. This lets you track each individual control when calculating the status and value of the root control. | | Add an alias | Initially, the form contains one `Alias` field. To add another field, click the **Add Alias** button. You can also validate the array of aliases reported by the form model displayed by `[Form](../api/forms/form) Value` at the bottom of the template. **NOTE**: Instead of a form control instance for each alias, you can compose another form group instance with additional fields. The process of defining a control for each item is the same. | Reactive forms API summary -------------------------- The following table lists the base classes and services used to create and manage reactive form controls. For complete syntax details, see the API reference documentation for the [Forms package](../api/forms "API reference"). #### Classes | Class | Details | | --- | --- | | `[AbstractControl](../api/forms/abstractcontrol)` | The abstract base class for the concrete form control classes `[FormControl](../api/forms/formcontrol)`, `[FormGroup](../api/forms/formgroup)`, and `[FormArray](../api/forms/formarray)`. It provides their common behaviors and properties. | | `[FormControl](../api/forms/formcontrol)` | Manages the value and validity status of an individual form control. It corresponds to an HTML form control such as `<input>` or `<select>`. | | `[FormGroup](../api/forms/formgroup)` | Manages the value and validity state of a group of `[AbstractControl](../api/forms/abstractcontrol)` instances. The group's properties include its child controls. The top-level form in your component is `[FormGroup](../api/forms/formgroup)`. | | `[FormArray](../api/forms/formarray)` | Manages the value and validity state of a numerically indexed array of `[AbstractControl](../api/forms/abstractcontrol)` instances. | | `[FormBuilder](../api/forms/formbuilder)` | An injectable service that provides factory methods for creating control instances. | | `[FormRecord](../api/forms/formrecord)` | Tracks the value and validity state of a collection of `[FormControl](../api/forms/formcontrol)` instances, each of which has the same value type. | #### Directives | Directive | Details | | --- | --- | | `[FormControlDirective](../api/forms/formcontroldirective)` | Syncs a standalone `[FormControl](../api/forms/formcontrol)` instance to a form control element. 
| | `[FormControlName](../api/forms/formcontrolname)` | Syncs `[FormControl](../api/forms/formcontrol)` in an existing `[FormGroup](../api/forms/formgroup)` instance to a form control element by name. | | `[FormGroupDirective](../api/forms/formgroupdirective)` | Syncs an existing `[FormGroup](../api/forms/formgroup)` instance to a DOM element. | | `[FormGroupName](../api/forms/formgroupname)` | Syncs a nested `[FormGroup](../api/forms/formgroup)` instance to a DOM element. | | `[FormArrayName](../api/forms/formarrayname)` | Syncs a nested `[FormArray](../api/forms/formarray)` instance to a DOM element. | Last reviewed on Mon Feb 28 2022
angular Singleton services Singleton services ================== A singleton service is a service for which only one instance exists in an application. For a sample application using the app-wide singleton service that this page describes, see the live example showcasing all the documented features of NgModules. Providing a singleton service ----------------------------- There are two ways to make a service a singleton in Angular: * Set the `providedIn` property of the `@[Injectable](../api/core/injectable)()` decorator to `"root"` * Include the service in the `AppModule` or in a module that is only imported by the `AppModule` ### Using `providedIn` Beginning with Angular 6.0, the preferred way to create a singleton service is to set `providedIn` to `root` on the service's `@[Injectable](../api/core/injectable)()` decorator. This tells Angular to provide the service in the application root. ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root', }) export class UserService { } ``` For more detailed information on services, see the [Services](../tutorial/tour-of-heroes/toh-pt4) chapter of the [Tour of Heroes tutorial](../tutorial/tour-of-heroes). ### NgModule `providers` array In applications built with Angular versions prior to 6.0, services are registered in NgModule `providers` arrays as follows: ``` @NgModule({ … providers: [UserService], … }) ``` If this NgModule were the root `AppModule`, the `UserService` would be a singleton and available throughout the application. Though you may see it coded this way, using the `providedIn` property of the `@[Injectable](../api/core/injectable)()` decorator on the service itself is preferable as of Angular 6.0 because it makes your services tree-shakable. The `forRoot()` pattern ----------------------- Generally, you'll only need `providedIn` for providing services and `forRoot()`/`forChild()` for routing. However, understanding how `forRoot()` works to make sure a service is a singleton will inform your development at a deeper level. If a module defines both providers and declarations (components, directives, pipes), then loading the module in multiple feature modules would duplicate the registration of the service. This could result in multiple service instances and the service would no longer behave as a singleton. There are multiple ways to prevent this: * Use the [`providedIn` syntax](singleton-services#providedIn) instead of registering the service in the module. * Separate your services into their own module. * Define `forRoot()` and `forChild()` methods in the module. > **NOTE**: There are two example applications where you can see this scenario: the more advanced NgModules live example, which contains `forRoot()` and `forChild()` in the routing modules and the `GreetingModule`, and the simpler Lazy Loading live example. For an introductory explanation, see the [Lazy Loading Feature Modules](lazy-loading-ngmodules) guide. > > Use `forRoot()` to separate providers from a module so you can import that module into the root module with `providers` and child modules without `providers`. 1. Create a static method `forRoot()` on the module. 2. Place the providers into the `forRoot()` method.
``` static forRoot(config: UserServiceConfig): ModuleWithProviders<GreetingModule> { return { ngModule: GreetingModule, providers: [ {provide: UserServiceConfig, useValue: config } ] }; } ``` ### `forRoot()` and the `[Router](../api/router/router)` `[RouterModule](../api/router/routermodule)` provides the `[Router](../api/router/router)` service, as well as router directives, such as `[RouterOutlet](../api/router/routeroutlet)` and `[routerLink](../api/router/routerlink)`. The root application module imports `[RouterModule](../api/router/routermodule)` so that the application has a `[Router](../api/router/router)` and the root application components can access the router directives. Any feature modules must also import `[RouterModule](../api/router/routermodule)` so that their components can place router directives into their templates. If the `[RouterModule](../api/router/routermodule)` didn't have `forRoot()`, then each feature module would instantiate a new `[Router](../api/router/router)` instance, which would break the application as there can only be one `[Router](../api/router/router)`. By using the `forRoot()` method, the root application module imports `RouterModule.forRoot(...)` and gets a `[Router](../api/router/router)`, and all feature modules import `RouterModule.forChild(...)`, which does not instantiate another `[Router](../api/router/router)`. > **NOTE**: If you have a module which has both providers and declarations, you *can* use this technique to separate them out and you may see this pattern in legacy applications. However, since Angular 6.0, the best practice for providing services is with the `@[Injectable](../api/core/injectable)()` `providedIn` property. > > ### How `forRoot()` works `forRoot()` takes a service configuration object and returns a [ModuleWithProviders](../api/core/modulewithproviders), which is a simple object with the following properties: | Properties | Details | | --- | --- | | `ngModule` | In this example, the `GreetingModule` class | | `providers` | The configured providers | In the live example, the root `AppModule` imports the `GreetingModule` and adds the `providers` to the `AppModule` providers. Specifically, Angular accumulates all imported providers before appending the items listed in `@[NgModule.providers](../api/core/ngmodule#providers)`. This sequence ensures that whatever you add explicitly to the `AppModule` providers takes precedence over the providers of imported modules. The sample application imports `GreetingModule` and uses its `forRoot()` method one time, in `AppModule`. Registering it once like this prevents multiple instances. You can also add a `forRoot()` method in the `GreetingModule` that configures the greeting `UserService`. In the following example, the optional, injected `UserServiceConfig` extends the greeting `UserService`. If a `UserServiceConfig` exists, the `UserService` sets the user name from that config. ``` constructor(@Optional() config?: UserServiceConfig) { if (config) { this._userName = config.userName; } } ``` Here's the `forRoot()` method that takes a `UserServiceConfig` object: ``` static forRoot(config: UserServiceConfig): ModuleWithProviders<GreetingModule> { return { ngModule: GreetingModule, providers: [ {provide: UserServiceConfig, useValue: config } ] }; } ``` Lastly, call it within the `imports` list of the `AppModule`. In the following snippet, other parts of the file are left out. For the complete file, see the live example, or continue to the next section of this document.
``` import { GreetingModule } from './greeting/greeting.module'; @NgModule({ imports: [ GreetingModule.forRoot({userName: 'Miss Marple'}), ], }) ``` The application displays "Miss Marple" as the user instead of the default "Sherlock Holmes". Remember to import `GreetingModule` as a Javascript import at the top of the file and don't add it to more than one `@[NgModule](../api/core/ngmodule)` `imports` list. Prevent reimport of the `GreetingModule` ---------------------------------------- Only the root `AppModule` should import the `GreetingModule`. If a lazy-loaded module imports it too, the application can generate [multiple instances](ngmodule-faq#q-why-bad) of a service. To guard against a lazy loaded module re-importing `GreetingModule`, add the following `GreetingModule` constructor. ``` constructor(@Optional() @SkipSelf() parentModule?: GreetingModule) { if (parentModule) { throw new Error( 'GreetingModule is already loaded. Import it in the AppModule only'); } } ``` The constructor tells Angular to inject the `GreetingModule` into itself. The injection would be circular if Angular looked for `GreetingModule` in the *current* injector, but the `@[SkipSelf](../api/core/skipself)()` decorator means "look for `GreetingModule` in an ancestor injector, above me in the injector hierarchy." By default, the injector throws an error when it can't find a requested provider. The `@[Optional](../api/core/optional)()` decorator means not finding the service is OK. The injector returns `null`, the `parentModule` parameter is null, and the constructor concludes uneventfully. It's a different story if you improperly import `GreetingModule` into a lazy loaded module such as `CustomersModule`. Angular creates a lazy loaded module with its own injector, a child of the root injector. `@[SkipSelf](../api/core/skipself)()` causes Angular to look for a `GreetingModule` in the parent injector, which this time is the root injector. Of course it finds the instance imported by the root `AppModule`. Now `parentModule` exists and the constructor throws the error. Here are the two files in their entirety for reference: ``` import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; /* App Root */ import { AppComponent } from './app.component'; /* Feature Modules */ import { ContactModule } from './contact/contact.module'; import { GreetingModule } from './greeting/greeting.module'; /* Routing Module */ import { AppRoutingModule } from './app-routing.module'; @NgModule({ imports: [ BrowserModule, ContactModule, GreetingModule.forRoot({userName: 'Miss Marple'}), AppRoutingModule ], declarations: [ AppComponent ], bootstrap: [AppComponent] }) export class AppModule { } ``` ``` import { ModuleWithProviders, NgModule, Optional, SkipSelf } from '@angular/core'; import { CommonModule } from '@angular/common'; import { GreetingComponent } from './greeting.component'; import { UserServiceConfig } from './user.service'; @NgModule({ imports: [ CommonModule ], declarations: [ GreetingComponent ], exports: [ GreetingComponent ] }) export class GreetingModule { constructor(@Optional() @SkipSelf() parentModule?: GreetingModule) { if (parentModule) { throw new Error( 'GreetingModule is already loaded. 
Import it in the AppModule only'); } } static forRoot(config: UserServiceConfig): ModuleWithProviders<GreetingModule> { return { ngModule: GreetingModule, providers: [ {provide: UserServiceConfig, useValue: config } ] }; } } ``` More on NgModules ----------------- You may also be interested in: * [Sharing Modules](sharing-ngmodules), which elaborates on the concepts covered on this page * [Lazy Loading Modules](lazy-loading-ngmodules) * [NgModule FAQ](ngmodule-faq) Last reviewed on Mon Feb 28 2022 angular AngularJS to Angular concepts: Quick reference AngularJS to Angular concepts: Quick reference ============================================== *Angular* is the name for the Angular of today and tomorrow. *AngularJS* is the name for all v1.x versions of Angular. This guide helps you transition from AngularJS to Angular by mapping AngularJS syntax to the corresponding Angular syntax. **See the Angular syntax in this guide's live example.** Template basics --------------- Templates are the user-facing part of an Angular application and are written in HTML. The following table lists some of the key AngularJS template features with their corresponding Angular template syntax. ### Bindings / interpolation → bindings / interpolation | AngularJS | Angular | | --- | --- | | ``` Your favorite hero is: {{vm.favoriteHero}} ``` In AngularJS, an expression in curly braces denotes one-way binding. This binds the value of the element to a property in the controller associated with this template. When using the `controller as` syntax, the binding is prefixed with the controller alias `vm` or `$ctrl` because you have to be specific about the source. | ``` Your favorite hero is: {{favoriteHero}} ``` In Angular, a template expression in curly braces still denotes one-way binding. This binds the value of the element to a property of the component. The context of the binding is implied and is always the associated component, so it needs no reference variable. For more information, see the [Interpolation](interpolation "Text interpolation | Angular") guide. | ### Filters → pipes | AngularJS | Angular | | --- | --- | | ``` <td>   {{movie.title | uppercase}} </td> ``` To filter output in AngularJS templates, use the pipe `|` character and one or more filters. This example filters the `title` property to uppercase. | ``` <td>{{movie.title | uppercase}}</td> ``` In Angular, you use similar syntax with the pipe `|` character to filter output, but now you call them **pipes**. Many, but not all, of the built-in filters from AngularJS are built-in pipes in Angular. For more information, see [Filters/pipes](ajs-quick-reference#filters--pipes "Filters/pipes - AngularJS to Angular concepts: Quick reference | Angular"). | ### Local variables → input variables | AngularJS | Angular | | --- | --- | | ``` <tr ng-repeat="movie in vm.movies">   <td>     {{movie.title}}   </td> </tr> ``` Here, `movie` is a user-defined local variable. | ``` <tr *ngFor="let movie of movies"> <td>{{movie.title}}</td> </tr> ``` Angular has true template input variables that are explicitly defined using the `let` keyword. For more information, see the [Structural directive shorthand](structural-directives#structural-directive-shorthand "Structural directive shorthand - Writing structural directives | Angular") section of [Structural Directives](structural-directives "Writing structural directives | Angular"). | Template directives ------------------- AngularJS provides more than seventy built-in directives for templates.
Many of them are not needed in Angular because of its more capable and expressive binding system. The following are some of the key AngularJS built-in directives and their equivalents in Angular. ### `ng-app` → bootstrapping | AngularJS | Angular | | --- | --- | | ``` <body ng-app="movieHunter"> ``` The application startup process is called **bootstrapping**. Although you can bootstrap an AngularJS application in code, many applications bootstrap declaratively with the `ng-app` directive, giving it the name of the module (`movieHunter`) of the application. | ``` import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; platformBrowserDynamic().bootstrapModule(AppModule) .catch(err => console.error(err)); ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; @NgModule({ imports: [ BrowserModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` Angular does not have a bootstrap directive. To launch the application in code, explicitly bootstrap the root module (`AppModule`) of the application in `main.ts` and the root component (`AppComponent`) of the application in `app.module.ts`. | ### `ng-class` → `[ngClass](../api/common/ngclass)` | AngularJS | Angular | | --- | --- | | ``` <div ng-class="{active: isActive}"> <div ng-class="{active: isActive, shazam: isImportant}"> ``` In AngularJS, the `ng-class` directive includes/excludes CSS classes based on an expression. The expression is often a key-value object, with key defined as a CSS class name, and value as a template expression that evaluates to a Boolean. In the first example, the `active` class is applied to the element if `isActive` is true. You can specify multiple classes, as shown in the second example. | ``` <div [ngClass]="{'active': isActive}"> <div [ngClass]="{'active': isActive, 'shazam': isImportant}"> <div [class.active]="isActive"> ``` In Angular, the `[ngClass](../api/common/ngclass)` directive works similarly. It includes/excludes CSS classes based on an expression. In the first example, the `active` class is applied to the element if `isActive` is true. You can specify multiple classes, as shown in the second example. Angular also has **class binding**, which is a good way to add or remove a single class, as shown in the third example. For more information see [Attribute, class, and style bindings](attribute-binding "Attribute, class, and style bindings | Angular") page. | ### `ng-click` → Bind to the `click` event | AngularJS | Angular | | --- | --- | | ``` <button ng-click="vm.toggleImage()"> <button ng-click="vm.toggleImage($event)"> ``` In AngularJS, the `ng-click` directive allows you to specify custom behavior when an element is clicked. In the first example, when the user clicks the button, the `toggleImage()` method in the controller referenced by the `vm` `controller as` alias is executed. The second example demonstrates passing in the `$event` object, which provides details about the event to the controller. | ``` <button type="button" (click)="toggleImage()"> <button type="button" (click)="toggleImage($event)"> ``` AngularJS event-based directives do not exist in Angular. Rather, define one-way binding from the template view to the component using **event binding**. 
For event binding, define the name of the target event within parenthesis and specify a template statement, in quotes, to the right of the equals. Angular then sets up an event handler for the target event. When the event is raised, the handler executes the template statement. In the first example, when a user clicks the button, the `toggleImage()` method in the associated component is executed. The second example demonstrates passing in the `$event` object, which provides details about the event to the component. For a list of DOM events, see [Event reference](https://developer.mozilla.org/docs/Web/Events "Event reference | MDN"). For more information, see the [Event binding](event-binding "Event binding | Angular") page. | ### `ng-controller` → component decorator | AngularJS | Angular | | --- | --- | | ``` <div ng-controller="MovieListCtrl as vm"> ``` In AngularJS, the `ng-controller` directive attaches a controller to the view. Using the `ng-controller`, or defining the controller as part of the routing, ties the view to the controller code associated with that view. | ``` @Component({ selector: 'app-movie-list', templateUrl: './movie-list.component.html', styleUrls: [ './movie-list.component.css' ], }) ``` In Angular, the template no longer specifies its associated controller. Rather, the component specifies its associated template as part of the component class decorator. For more information, see [Architecture Overview](architecture#components "Components - Introduction to Angular concepts | Angular"). | ### `ng-hide` → Bind to the `hidden` property | AngularJS | Angular | | --- | --- | | In AngularJS, the `ng-hide` directive shows or hides the associated HTML element based on an expression. For more information, see [ng-show](ajs-quick-reference#template-directives "Template directives - AngularJS to Angular concepts: Quick reference | Angular"). | In Angular, you use property binding. Angular does not have a built-in *hide* directive. For more information, see [ng-show](ajs-quick-reference#template-directives "Template directives - AngularJS to Angular concepts: Quick reference | Angular"). | ### `ng-href` → Bind to the `href` property | AngularJS | Angular | | --- | --- | | ``` <a ng-href="{{ angularDocsUrl }}">   Angular Docs </a> ``` The `ng-href` directive allows AngularJS to preprocess the `href` property. `ng-href` can replace the binding expression with the appropriate URL before the browser fetches from that URL. In AngularJS, the `ng-href` is often used to activate a route as part of navigation. ``` <a ng-href="#{{ moviesHash }}">   Movies </a> ``` Routing is handled differently in Angular. | ``` <a [href]="angularDocsUrl">Angular Docs</a> ``` Angular uses property binding. Angular does not have a built-in *href* directive. Place the `href` property of the element in square brackets and set it to a quoted template expression. For more information see the [Property binding](property-binding "Property binding | Angular") page. In Angular, `href` is no longer used for routing. Routing uses `[routerLink](../api/router/routerlink)`, as shown in the following example. ``` <a [routerLink]="['/movies']">Movies</a> ``` For more information on routing, see [Defining a basic route](router#defining-a-basic-route "Defining a basic route - Common Routing Tasks | Angular") in the [Routing & Navigation](router "Common Routing Tasks | Angular") page. 
| ### `ng-if` → `*[ngIf](../api/common/ngif)` | AngularJS | Angular | | --- | --- | | ``` <table ng-if="movies.length"> ``` In AngularJS, the `ng-if` directive removes or recreates a section of the DOM, based on an expression. If the expression is false, the element is removed from the DOM. In this example, the `<table>` element is removed from the DOM unless the `movies` array has a length greater than zero. | ``` <table *ngIf="movies.length"> ``` The `*[ngIf](../api/common/ngif)` directive in Angular works the same as the `ng-if` directive in AngularJS. It removes or recreates a section of the DOM based on an expression. In this example, the `<table>` element is removed from the DOM unless the `movies` array has a length. The (`*`) before `[ngIf](../api/common/ngif)` is required in this example. For more information, see [Structural Directives](structural-directives "Writing structural directives | Angular"). | ### `ng-model` → `[ngModel](../api/forms/ngmodel)` | AngularJS | Angular | | --- | --- | | ``` <input ng-model="vm.favoriteHero" /> ``` In AngularJS, the `ng-model` directive binds a form control to a property in the controller associated with the template. This provides **two-way binding** whereby changes result in the value in the view and the model being synchronized. | ``` <input [(ngModel)]="favoriteHero" /> ``` In Angular, **two-way binding** is indicated by `[()]`, descriptively referred to as a "banana in a box." This syntax is a shortcut for defining both: * property binding, from the component to the view * event binding, from the view to the component thereby providing two-way binding. For more information on two-way binding with `[ngModel](../api/forms/ngmodel)`, see the [Displaying and updating properties with `ngModel`](built-in-directives#displaying-and-updating-properties-with-ngmodel "Displaying and updating properties with ngModel - Built-in directives | Angular") section of [Built-in directives](built-in-directives "Built-in directives | Angular"). | ### `ng-repeat` → `*[ngFor](../api/common/ngfor)` | AngularJS | Angular | | --- | --- | | ``` <tr ng-repeat="movie in vm.movies"> ``` In AngularJS, the `ng-repeat` directive repeats the associated DOM element for each item in the specified collection. In this example, the table row (`<tr>`) element repeats for each movie object in the collection of movies. | ``` <tr *ngFor="let movie of movies"> ``` The `*[ngFor](../api/common/ngfor)` directive in Angular is like the `ng-repeat` directive in AngularJS. It repeats the associated DOM element for each item in the specified collection. More accurately, it turns the defined element (`<tr>` in this example) and its contents into a template and uses that template to instantiate a view for each item in the list. Notice the other syntax differences: * The (`*`) before `[ngFor](../api/common/ngfor)` is required * The `let` keyword identifies `movie` as an input variable * The list preposition is `of`, not `in` For more information, see [Structural Directives](structural-directives "Writing structural directives | Angular"). | ### `ng-show` → Bind to the `hidden` property | AngularJS | Angular | | --- | --- | | ``` <h3 ng-show="vm.favoriteHero">   Your favorite hero is: {{vm.favoriteHero}} </h3> ``` In AngularJS, the `ng-show` directive shows or hides the associated DOM element, based on an expression. In this example, the `<h3>` element is shown if the `favoriteHero` variable is truthy.
| ``` <h3 [hidden]="!favoriteHero"> Your favorite hero is: {{favoriteHero}} </h3> ``` Angular uses property binding. Angular has no built-in *show* directive. For hiding and showing elements, bind to the HTML `hidden` property. Place the `hidden` property in square brackets and set it to a quoted template expression that evaluates to the *opposite* of *show*. In this example, the `<h3>` element is hidden if the `favoriteHero` variable is not truthy. For more information on property binding, see the [Property binding](property-binding "Property binding | Angular") page. | ### `ng-src` → Bind to the `src` property | AngularJS | Angular | | --- | --- | | ``` <img ng-src="{{movie.imageurl}}"> ``` The `ng-src` directive allows AngularJS to preprocess the `src` property. This replaces the binding expression with the appropriate URL before the browser fetches from that URL. | ``` <img [src]="movie.imageurl" [alt]="movie.title"> ``` Angular uses property binding. Angular has no built-in *src* directive. Place the `src` property in square brackets and set it to a quoted template expression. For more information on property binding, see the [Property binding](property-binding "Property binding | Angular") page. | ### `ng-style` → `[ngStyle](../api/common/ngstyle)` | AngularJS | Angular | | --- | --- | | ``` <div ng-style="{color: colorPreference}"> ``` In AngularJS, the `ng-style` directive sets a CSS style on an HTML element based on an expression. That expression is often a key-value control object with: * each key of the object defined as a CSS property * each value defined as an expression that evaluates to a value appropriate for the style In the example, the `color` style is set to the current value of the `colorPreference` variable. | ``` <div [ngStyle]="{'color': colorPreference}"> <div [style.color]="colorPreference"> ``` In Angular, the `[ngStyle](../api/common/ngstyle)` directive works similarly. It sets a CSS style on an HTML element based on an expression. In the first example, the `color` style is set to the current value of the `colorPreference` variable. Angular also has **style binding**, which is a good way to set a single style. This is shown in the second example. For more information on style binding, see the [Style binding](class-binding "Class and style binding | Angular") section of the [Attribute binding](attribute-binding "Attribute, class, and style bindings | Angular") page. For more information on the `[ngStyle](../api/common/ngstyle)` directive, see the [NgStyle](built-in-directives#setting-inline-styles-with-ngstyle "Setting inline styles with NgStyle - Built-in directives | Angular") section of the [Built-in directives](built-in-directives "Built-in directives | Angular") page. | ### `ng-switch` → `[ngSwitch](../api/common/ngswitch)` | AngularJS | Angular | | --- | --- | | ``` <div ng-switch="vm.favoriteHero && vm.checkMovieHero(vm.favoriteHero)">   <div ng-switch-when="true">     Excellent choice.   </div>   <div ng-switch-when="false">     No movie, sorry.   </div>   <div ng-switch-default>     Please enter your favorite hero.   </div> </div> ``` In AngularJS, the `ng-switch` directive swaps the contents of an element by selecting one of the templates based on the current value of an expression. In this example, if `favoriteHero` is not set, the template displays "Please enter …" If `favoriteHero` is set, it checks the movie hero by calling a controller method.
If that method returns `true`, the template displays "Excellent choice!" If that method returns `false`, the template displays "No movie, sorry!" | ``` <span [ngSwitch]="favoriteHero && checkMovieHero(favoriteHero)"> <p *ngSwitchCase="true"> Excellent choice! </p> <p *ngSwitchCase="false"> No movie, sorry! </p> <p *ngSwitchDefault> Please enter your favorite hero. </p> </span> ``` In Angular, the `[ngSwitch](../api/common/ngswitch)` directive works similarly. It displays an element whose `*[ngSwitchCase](../api/common/ngswitchcase)` matches the current `[ngSwitch](../api/common/ngswitch)` expression value. In this example, if `favoriteHero` is not set, the `[ngSwitch](../api/common/ngswitch)` value is `null` and `*[ngSwitchDefault](../api/common/ngswitchdefault)` displays "Please enter your favorite hero." If `favoriteHero` is set, the application checks the movie hero by calling a component method. If that method returns `true`, the application selects `*[ngSwitchCase](../api/common/ngswitchcase)="true"` and displays: "Excellent choice." If that method returns `false`, the application selects `*[ngSwitchCase](../api/common/ngswitchcase)="false"` and displays: "No movie, sorry." The (`*`) before `[ngSwitchCase](../api/common/ngswitchcase)` and `[ngSwitchDefault](../api/common/ngswitchdefault)` is required in this example. For more information, see [The NgSwitch directives](built-in-directives#switching-cases-with-ngswitch "Switching cases with NgSwitch - Built-in directives | Angular") section of the [Built-in directives](built-in-directives "Built-in directives | Angular") page. | Filters / pipes --------------- Angular **pipes** provide formatting and transformation for data in the template, like AngularJS **filters**. Many of the built-in filters in AngularJS have corresponding pipes in Angular. For more information on pipes, see [Pipes](pipes "Transforming Data Using Pipes | Angular"). ### `[currency](../api/common/currencypipe)` → `[currency](../api/common/currencypipe)` | AngularJS | Angular | | --- | --- | | ``` <td>   {{movie.price | currency}} </td> ``` Formats a number as currency. | ``` <td>{{movie.price | currency:'USD':true}}</td> ``` The Angular `[currency](../api/common/currencypipe)` pipe is similar although some of the parameters have changed. | ### `[date](../api/common/datepipe)` → `[date](../api/common/datepipe)` | AngularJS | Angular | | --- | --- | | ``` <td>   {{movie.releaseDate | date}} </td> ``` Formats a date to a string based on the requested format. | ``` <td>{{movie.releaseDate | date}}</td> ``` The Angular `[date](../api/common/datepipe)` pipe is similar. | ### `filter` → none | AngularJS | Angular | | --- | --- | | ``` <tr ng-repeat="movie in movieList | filter: {title:listFilter}"> ``` Selects a subset of items from the defined collection, based on the filter criteria. | For performance reasons, no comparable pipe exists in Angular. Do all your filtering in the component. If you need the same filtering code in several templates, consider building a custom pipe. | ### `json` → `json` | AngularJS | Angular | | --- | --- | | ``` <pre>   {{movie | json}} </pre> ``` Converts a JavaScript object into a JSON string. This is useful for debugging. | ``` <pre>{{movie | json}}</pre> ``` The Angular [`json`](../api/common/jsonpipe "JsonPipe | @angular/common - API | Angular") pipe does the same thing.
| ### `limitTo` → `[slice](../api/common/slicepipe)` | AngularJS | Angular | | --- | --- | | ``` <tr ng-repeat="movie in movieList | limitTo:2:0"> ``` Selects up to the number of items specified by the first parameter (`2`), starting at the index specified by the optional second parameter (`0`). | ``` <tr *ngFor="let movie of movies | slice:0:2"> ``` The `[SlicePipe](../api/common/slicepipe)` does the same thing but the *order of the parameters is reversed*, in keeping with the JavaScript `slice` method. The first parameter is the starting index and the second is the limit. As in AngularJS, coding this operation within the component instead could improve performance. | ### `[lowercase](../api/common/lowercasepipe)` → `[lowercase](../api/common/lowercasepipe)` | AngularJS | Angular | | --- | --- | | ``` <td>   {{movie.title | lowercase}} </td> ``` Converts the string to lowercase. | ``` <td>{{movie.title | lowercase}}</td> ``` The Angular `[lowercase](../api/common/lowercasepipe)` pipe does the same thing. | ### `number` → `number` | AngularJS | Angular | | --- | --- | | ``` <td>   {{movie.starRating | number}} </td> ``` Formats a number as text. | ``` <td>{{movie.starRating | number}}</td> <td>{{movie.starRating | number:'1.1-2'}}</td> <td>{{movie.approvalRating | percent: '1.0-2'}}</td> ``` The Angular [`number`](../api/common/decimalpipe "DecimalPipe | @angular/common - API | Angular") pipe is similar. It provides more capabilities when defining the decimal places, as shown in the preceding second example. Angular also has a `[percent](../api/common/percentpipe)` pipe, which formats a number as a local percentage as shown in the third example. | ### `orderBy` → none | AngularJS | Angular | | --- | --- | | ``` <tr ng-repeat="movie in movieList | orderBy : 'title'"> ``` Displays the collection in the order specified by the expression. In this example, the movie title orders the `movieList`. | For performance reasons, no comparable pipe exists in Angular. Instead, use component code to order or sort results. If you need the same ordering or sorting code in several templates, consider building a custom pipe. | Modules / controllers / components ---------------------------------- In both AngularJS and Angular, modules help you organize your application into cohesive blocks of features. In AngularJS, you write the code that provides the model and the methods for the view in a **controller**. In Angular, you build a **component**. Because much AngularJS code is in JavaScript, JavaScript code is shown in the AngularJS column. The Angular code is shown using TypeScript. ### Immediately invoked function expression (IIFE) → none | AngularJS | Angular | | --- | --- | | ``` (   function () {     …   }() ); ``` In AngularJS, an IIFE around controller code keeps it out of the global namespace. | This is a nonissue in Angular because ES 2015 modules handle the namespace for you. For more information on modules, see the [Modules](architecture#modules "Modules - Introduction to Angular concepts | Angular") section of the [Architecture Overview](architecture "Introduction to Angular concepts | Angular"). | ### Angular modules → `NgModules` | AngularJS | Angular | | --- | --- | | ``` angular .module(   "movieHunter",   [     "ngRoute"   ] ); ``` In AngularJS, an Angular module keeps track of controllers, services, and other code. The second argument defines the list of other modules that this module depends upon.
| ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; @NgModule({ imports: [ BrowserModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` NgModules, defined with the `[NgModule](../api/core/ngmodule)` decorator, serve the same purpose: * `imports`: specifies the list of other modules that this module depends upon * `declarations`: keeps track of your components, pipes, and directives. For more information on modules, see [NgModules](ngmodules "NgModules | Angular"). | ### Controller registration → component decorator | AngularJS | Angular | | --- | --- | | ``` angular .module(   "movieHunter" ) .controller(   "MovieListCtrl",   [     "movieService",     MovieListCtrl   ] ); ``` AngularJS has code in each controller that looks up an appropriate Angular module and registers the controller with that module. The first argument is the controller name. The second argument defines the string names of all dependencies injected into this controller, and a reference to the controller function. | ``` @Component({ selector: 'app-movie-list', templateUrl: './movie-list.component.html', styleUrls: [ './movie-list.component.css' ], }) ``` Angular adds a decorator to the component class to provide any required metadata. The `@[Component](../api/core/component)` decorator declares that the class is a component and provides metadata about that component such as its selector, or tag, and its template. This is how you associate a template with logic, which is defined in the component class. For more information, see the [Components](architecture#components "Components - Introduction to Angular concepts | Angular") section of the [Architecture Overview](architecture "Introduction to Angular concepts | Angular") page. | ### Controller function → component class | AngularJS | Angular | | --- | --- | | ``` function MovieListCtrl(movieService) { } ``` In AngularJS, you write the code for the model and methods in a controller function. | ``` export class MovieListComponent { } ``` In Angular, you create a component class to contain the data model and control methods. Use the TypeScript `export` keyword to export the class so that the component can be imported into NgModules. For more information, see the [Components](architecture#components "Components - Introduction to Angular concepts | Angular") section of the [Architecture Overview](architecture "Introduction to Angular concepts | Angular") page. | ### Dependency injection → dependency injection | AngularJS | Angular | | --- | --- | | ``` MovieListCtrl.$inject = [   'MovieService' ]; function MovieListCtrl(movieService) { } ``` In AngularJS, you pass in any dependencies as controller function arguments. This example injects a `MovieService`. To guard against minification problems, tell Angular explicitly that it should inject an instance of the `MovieService` in the first parameter. | ``` constructor(movieService: MovieService) { } ``` In Angular, you pass in dependencies as arguments to the component class constructor. This example injects a `MovieService`. The TypeScript type of the first parameter tells Angular what to inject, even after minification.
For more information, see the [Dependency injection](architecture#services-and-dependency-injection "Services and dependency injection - Introduction to Angular concepts | Angular") section of the [Architecture Overview](architecture "Introduction to Angular concepts | Angular"). | Style sheets ------------ Style sheets give your application a nice look. In AngularJS, you specify the style sheets for your entire application. As the application grows over time, the styles for the many parts of the application merge, which can cause unexpected results. In Angular, you can still define style sheets for your entire application. Now you can also encapsulate a style sheet within a specific component. ### `Link` tag → `styles` configuration or `styleUrls` | AngularJS | Angular | | --- | --- | | ``` <link href="styles.css"       rel="stylesheet" /> ``` AngularJS uses a `link` tag in the head section of the `index.html` file to define the styles for the application. | ``` "styles": [ "styles.css" ], ``` With the Angular CLI, you can configure your global styles in the `angular.json` file. You can rename the extension to `.scss` to use Sass. In Angular, you can use the `styles` or `styleUrls` property of the `@[Component](../api/core/component)` metadata to define a style sheet for a particular component. ``` styleUrls: [ './movie-list.component.css' ], ``` This allows you to set appropriate styles for individual components that do not leak into other parts of the application. | Last reviewed on Mon Feb 28 2022
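Pulling the preceding mappings together, the AngularJS `MovieListCtrl` examples above translate to a component roughly like the following sketch. The movie type, service path, and file names are assumptions for illustration only:

```
import { Component } from '@angular/core';

// Hypothetical service, standing in for the AngularJS MovieService shown above.
import { MovieService } from './movie.service';

@Component({
  selector: 'app-movie-list',
  templateUrl: './movie-list.component.html',
  styleUrls: [ './movie-list.component.css' ],
})
export class MovieListComponent {
  // Bound in the template with {{favoriteHero}} and *ngFor="let movie of movies".
  favoriteHero = '';
  movies: { title: string }[] = [];

  // The parameter type tells Angular to inject the MovieService instance.
  constructor(private movieService: MovieService) { }
}
```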
angular Sharing modules Sharing modules =============== Creating shared modules allows you to organize and streamline your code. You can put commonly used directives, pipes, and components into one module and then import just that module wherever you need it in other parts of your application. Consider the following module from an imaginary app: ``` import { CommonModule } from '@angular/common'; import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { CustomerComponent } from './customer.component'; import { NewItemDirective } from './new-item.directive'; import { OrdersPipe } from './orders.pipe'; @NgModule({ imports: [ CommonModule ], declarations: [ CustomerComponent, NewItemDirective, OrdersPipe ], exports: [ CustomerComponent, NewItemDirective, OrdersPipe, CommonModule, FormsModule ] }) export class SharedModule { } ``` Notice the following: * It imports the `[CommonModule](../api/common/commonmodule)` because the module's component needs common directives * It declares and exports the utility pipe, directive, and component classes * It re-exports the `[CommonModule](../api/common/commonmodule)` and `[FormsModule](../api/forms/formsmodule)` By re-exporting `[CommonModule](../api/common/commonmodule)` and `[FormsModule](../api/forms/formsmodule)`, any other module that imports this `SharedModule` gets access to directives like `[NgIf](../api/common/ngif)` and `[NgFor](../api/common/ngfor)` from `[CommonModule](../api/common/commonmodule)` and can bind to component properties with `[([ngModel](../api/forms/ngmodel))]`, a directive in the `[FormsModule](../api/forms/formsmodule)`. Even though the components declared by `SharedModule` might not bind with `[([ngModel](../api/forms/ngmodel))]` and there may be no need for `SharedModule` to import `[FormsModule](../api/forms/formsmodule)`, `SharedModule` can still export `[FormsModule](../api/forms/formsmodule)` without listing it among its `imports`. This way, you can give other modules access to `[FormsModule](../api/forms/formsmodule)` without having to import it directly into the `@[NgModule](../api/core/ngmodule)` decorator. More on NgModules ----------------- You may also be interested in the following: * [Providers](providers) * [Types of Feature Modules](module-types) Last reviewed on Mon Feb 28 2022 angular Slow computations Slow computations ================= On every change detection cycle, Angular synchronously: * Evaluates all template expressions in all components, unless specified otherwise, based on each component's detection strategy * Executes the `ngDoCheck`, `ngAfterContentChecked`, `ngAfterViewChecked`, and `ngOnChanges` lifecycle hooks. A single slow computation within a template or a lifecycle hook can slow down the entire change detection process because Angular runs the computations sequentially. Identifying slow computations ----------------------------- You can identify heavy computations with Angular DevTools’ profiler. In the performance timeline, click a bar to preview a particular change detection cycle. This displays a bar chart, which shows how long the framework spent in change detection for each component. When you click a component, you can preview how long Angular spent evaluating its template and lifecycle hooks. For example, in the preceding screenshot, the second recorded change detection cycle is selected. Angular spent over 573 ms on this cycle, with the most time spent in the `EmployeeListComponent`.
In the details panel, you can see that Angular spent over 297 ms evaluating the template of the `EmployeeListComponent`. Optimizing slow computations ---------------------------- Here are several techniques to remove slow computations: * **Optimizing the underlying algorithm**. This is the recommended approach. If you can speed up the algorithm that is causing the problem, you can speed up the entire change detection mechanism. * **Caching using pure pipes**. You can move the heavy computation to a pure [pipe](pipes). Angular reevaluates a pure pipe only if it detects that its inputs have changed, compared to the previous time Angular called it. * **Using memoization**. [Memoization](https://en.wikipedia.org/wiki/Memoization) is a similar technique to pure pipes, with the difference that pure pipes preserve only the last result from the computation, whereas memoization can store multiple results. * **Avoid repaints/reflows in lifecycle hooks**. Certain [operations](https://web.dev/avoid-large-complex-layouts-and-layout-thrashing/) cause the browser to either synchronously recalculate the layout of the page or re-render it. Since reflows and repaints are generally slow, you want to avoid performing them in every change detection cycle. Pure pipes and memoization have different trade-offs. Pure pipes are an Angular built-in concept compared to memoization, which is a general software engineering practice for caching function results. The memory overhead of memoization could be significant if you invoke the heavy computation frequently with different arguments. Last reviewed on Wed May 04 2022 angular Service worker configuration Service worker configuration ============================ This topic describes the properties of the service worker configuration file. Prerequisites ------------- A basic understanding of the following: * [Service worker overview](https://developer.chrome.com/docs/workbox/service-worker-overview/) * [Service Worker in Production](service-worker-devops) The `ngsw-config.json` configuration file specifies which files and data URLs the Angular service worker should cache and how it should update the cached files and data. The [Angular CLI](cli) processes the configuration file during `ng build`. To process it manually, use the `ngsw-config` tool (where `<project-name>` is the name of the project being built): ``` ./node_modules/.bin/ngsw-config ./dist/<project-name> ./ngsw-config.json [/base/href] ``` The configuration file uses the JSON format. All file paths must begin with `/`, which corresponds to the deployment directory —usually `dist/<project-name>` in CLI projects. Unless otherwise commented, patterns use a **limited\*** glob format that internally will be converted into regex: | Glob formats | Details | | --- | --- | | `**` | Matches 0 or more path segments | | `*` | Matches 0 or more characters excluding `/` | | `?` | Matches exactly one character excluding `/` | | `!` prefix | Marks the pattern as being negative, meaning that only files that don't match the pattern are included | > **\*** Note that some characters with a special meaning in a regular expression are not escaped and also the pattern is not wrapped in `^`/`$` in the internal glob to regex conversion. > > * `$` is a special character in regex that matches the end of the string and will not be automatically escaped when converting the glob pattern to a regular expression. If you want to literally match the `$` character, you have to escape it yourself (with `\\$`).
> > > > For example, the glob pattern `/foo/bar/$value` results in an unmatchable expression, because it is impossible to have a string that has any characters after it has ended. > > > > > * The pattern will not be automatically wrapped in `^` and `$` when converting it to a regular expression. Therefore, the patterns will partially match the request URLs. If you want your patterns to match the beginning and/or end of URLs, you can add `^`/`$` yourself. > > > > For example, the glob pattern `/foo/bar/*.js` will match both `.js` and `.json` files. If you want to only match `.js` files, use `/foo/bar/*.js$`. > > > > > > Example patterns: | Patterns | Details | | --- | --- | | `/**/*.html` | Specifies all HTML files | | `/*.html` | Specifies only HTML files in the root | | `!/**/*.map` | Exclude all sourcemaps | Service worker configuration properties --------------------------------------- The following sections describe each property of the configuration file. ### `appData` This section enables you to pass any data you want that describes this particular version of the application. The `[SwUpdate](../api/service-worker/swupdate)` service includes that data in the update notifications. Many applications use this section to provide additional information for the display of UI popups, notifying users of the available update. ### `index` Specifies the file that serves as the index page to satisfy navigation requests. Usually this is `/index.html`. ### `assetGroups` *Assets* are resources that are part of the application version that update along with the application. They can include resources loaded from the page's origin as well as third-party resources loaded from CDNs and other external URLs. As not all such external URLs might be known at build time, URL patterns can be matched. > For the service worker to handle resources that are loaded from different origins, make sure that [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is correctly configured on each origin's server. > > This field contains an array of asset groups, each of which defines a set of asset resources and the policy by which they are cached. ``` { "assetGroups": [ { … }, { … } ] } ``` > When the ServiceWorker handles a request, it checks asset groups in the order in which they appear in `ngsw-config.json`. The first asset group that matches the requested resource handles the request. > > It is recommended that you put the more specific asset groups higher in the list. For example, an asset group that matches `/foo.js` should appear before one that matches `*.js`. > > Each asset group specifies both a group of resources and a policy that governs them. This policy determines when the resources are fetched and what happens when changes are detected. Asset groups follow the Typescript interface shown here: ``` interface AssetGroup { name: string; installMode?: 'prefetch' | 'lazy'; updateMode?: 'prefetch' | 'lazy'; resources: { files?: string[]; urls?: string[]; }; cacheQueryOptions?: { ignoreSearch?: boolean; }; } ``` Each `AssetGroup` is defined by the following asset group properties. #### `name` A `name` is mandatory. It identifies this particular group of assets between versions of the configuration. #### `installMode` The `installMode` determines how these resources are initially cached. 
The `installMode` can be either of two values: | Values | Details | | --- | --- | | `prefetch` | Tells the Angular service worker to fetch every single listed resource while it's caching the current version of the application. This is bandwidth-intensive but ensures resources are available whenever they're requested, even if the browser is currently offline. | | `lazy` | Does not cache any of the resources up front. Instead, the Angular service worker only caches resources for which it receives requests. This is an on-demand caching mode. Resources that are never requested are not cached. This is useful for things like images at different resolutions, so the service worker only caches the correct assets for the particular screen and orientation. | Defaults to `prefetch`. #### `updateMode` For resources already in the cache, the `updateMode` determines the caching behavior when a new version of the application is discovered. Any resources in the group that have changed since the previous version are updated in accordance with `updateMode`. | Values | Details | | --- | --- | | `prefetch` | Tells the service worker to download and cache the changed resources immediately. | | `lazy` | Tells the service worker to not cache those resources. Instead, it treats them as unrequested and waits until they're requested again before updating them. An `updateMode` of `lazy` is only valid if the `installMode` is also `lazy`. | Defaults to the value `installMode` is set to. #### `resources` This section describes the resources to cache, broken up into the following groups: | Resource groups | Details | | --- | --- | | `files` | Lists patterns that match files in the distribution directory. These can be single files or glob-like patterns that match a number of files. | | `urls` | Includes both URLs and URL patterns that are matched at runtime. These resources are not fetched directly and do not have content hashes, but they are cached according to their HTTP headers. This is most useful for CDNs such as the Google Fonts service. *(Negative glob patterns are not supported and `?` will be matched literally; that is, it will not match any character other than `?`.)* | #### `cacheQueryOptions` These options are used to modify the matching behavior of requests. They are passed to the browsers `Cache#match` function. See [MDN](https://developer.mozilla.org/docs/Web/API/Cache/match) for details. Currently, only the following options are supported: | Options | Details | | --- | --- | | `ignoreSearch` | Ignore query parameters. Defaults to `false`. | ### `dataGroups` Unlike asset resources, data requests are not versioned along with the application. They're cached according to manually-configured policies that are more useful for situations such as API requests and other data dependencies. This field contains an array of data groups, each of which defines a set of data resources and the policy by which they are cached. ``` { "dataGroups": [ { … }, { … } ] } ``` > When the ServiceWorker handles a request, it checks data groups in the order in which they appear in `ngsw-config.json`. The first data group that matches the requested resource handles the request. > > It is recommended that you put the more specific data groups higher in the list. For example, a data group that matches `/api/foo.json` should appear before one that matches `/api/*.json`. 
> > Data groups follow this Typescript interface: ``` export interface DataGroup { name: string; urls: string[]; version?: number; cacheConfig: { maxSize: number; maxAge: string; timeout?: string; strategy?: 'freshness' | 'performance'; }; cacheQueryOptions?: { ignoreSearch?: boolean; }; } ``` Each `DataGroup` is defined by the following data group properties. #### `name` Similar to `assetGroups`, every data group has a `name` which uniquely identifies it. #### `urls` A list of URL patterns. URLs that match these patterns are cached according to this data group's policy. Only non-mutating requests (GET and HEAD) are cached. * Negative glob patterns are not supported * `?` is matched literally; that is, it matches *only* the character `?` #### `version` Occasionally APIs change formats in a way that is not backward-compatible. A new version of the application might not be compatible with the old API format and thus might not be compatible with existing cached resources from that API. `version` provides a mechanism to indicate that the resources being cached have been updated in a backwards-incompatible way, and that the old cache entries —those from previous versions— should be discarded. `version` is an integer field and defaults to `1`. #### `cacheConfig` The following properties define the policy by which matching requests are cached. ##### `maxSize` **Required** The maximum number of entries, or responses, in the cache. Open-ended caches can grow in unbounded ways and eventually exceed storage quotas, calling for eviction. ##### `maxAge` **Required** The `maxAge` parameter indicates how long responses are allowed to remain in the cache before being considered invalid and evicted. `maxAge` is a duration string, using the following unit suffixes: | Suffixes | Details | | --- | --- | | `d` | Days | | `h` | Hours | | `m` | Minutes | | `s` | Seconds | | `u` | Milliseconds | For example, the string `3d12h` caches content for up to three and a half days. ##### `timeout` This duration string specifies the network timeout. The network timeout is how long the Angular service worker waits for the network to respond before using a cached response, if configured to do so. `timeout` is a duration string, using the following unit suffixes: | Suffixes | Details | | --- | --- | | `d` | Days | | `h` | Hours | | `m` | Minutes | | `s` | Seconds | | `u` | Milliseconds | For example, the string `5s30u` translates to five seconds and 30 milliseconds of network timeout. ##### `strategy` The Angular service worker can use either of two caching strategies for data resources. | Caching strategies | Details | | --- | --- | | `performance` | The default, optimizes for responses that are as fast as possible. If a resource exists in the cache, the cached version is used, and no network request is made. This allows for some staleness, depending on the `maxAge`, in exchange for better performance. This is suitable for resources that don't change often; for example, user avatar images. | | `freshness` | Optimizes for currency of data, preferentially fetching requested data from the network. Only if the network times out, according to `timeout`, does the request fall back to the cache. This is useful for resources that change frequently; for example, account balances. 
| > You can also emulate a third strategy, [staleWhileRevalidate](https://developers.google.com/web/fundamentals/instant-and-offline/offline-cookbook/#stale-while-revalidate), which returns cached data if it is available, but also fetches fresh data from the network in the background for next time. To use this strategy set `strategy` to `freshness` and `timeout` to `0u` in `cacheConfig`. > > This essentially does the following: > > 1. Try to fetch from the network first. > 2. If the network request does not complete immediately, that is after a timeout of 0 ms, ignore the cache age and fall back to the cached value. > 3. Once the network request completes, update the cache for future requests. > 4. If the resource does not exist in the cache, wait for the network request anyway. > > ##### `cacheOpaqueResponses` Whether the Angular service worker should cache opaque responses or not. If not specified, the default value depends on the data group's configured strategy: | Strategies | Details | | --- | --- | | Groups with the `freshness` strategy | The default value is `true` and the service worker caches opaque responses. These groups will request the data every time and only fall back to the cached response when offline or on a slow network. Therefore, it doesn't matter if the service worker caches an error response. | | Groups with the `performance` strategy | The default value is `false` and the service worker doesn't cache opaque responses. These groups would continue to return a cached response until `maxAge` expires, even if the error was due to a temporary network or server issue. Therefore, it would be problematic for the service worker to cache an error response. | In case you are not familiar, an [opaque response](https://fetch.spec.whatwg.org#concept-filtered-response-opaque) is a special type of response returned when requesting a resource that is on a different origin which doesn't return CORS headers. One of the characteristics of an opaque response is that the service worker is not allowed to read its status, meaning it can't check if the request was successful or not. See [Introduction to fetch()](https://developers.google.com/web/updates/2015/03/introduction-to-fetch#response_types) for more details. If you are not able to implement CORS —for example, if you don't control the origin— prefer using the `freshness` strategy for resources that result in opaque responses. #### `cacheQueryOptions` See [assetGroups](service-worker-config#assetgroups) for details. ### `navigationUrls` This optional section enables you to specify a custom list of URLs that will be redirected to the index file. #### Handling navigation requests The ServiceWorker redirects navigation requests that don't match any `asset` or `data` group to the specified [index file](service-worker-config#index-file). A request is considered to be a navigation request if: * Its [method](https://developer.mozilla.org/docs/Web/API/Request/method) is `GET` * Its [mode](https://developer.mozilla.org/docs/Web/API/Request/mode) is `navigation` * It accepts a `text/html` response as determined by the value of the `Accept` header * Its URL matches the following criteria: + The URL must not contain a file extension (that is, a `.`) in the last path segment + The URL must not contain `__` > To configure whether navigation requests are sent through to the network or not, see the [navigationRequestStrategy](service-worker-config#navigation-request-strategy) section. 
> > #### Matching navigation request URLs While these default criteria are fine in most cases, it is sometimes desirable to configure different rules. For example, you might want to ignore specific routes, such as those that are not part of the Angular app, and pass them through to the server. This field contains an array of URLs and [glob-like](service-worker-config#glob-patterns) URL patterns that are matched at runtime. It can contain both negative patterns (that is, patterns starting with `!`) and non-negative patterns and URLs. Only requests whose URLs match *any* of the non-negative URLs/patterns and *none* of the negative ones are considered navigation requests. The URL query is ignored when matching. If the field is omitted, it defaults to: ``` [ '/**', // Include all URLs. '!/**/*.*', // Exclude URLs to files. '!/**/*__*', // Exclude URLs containing `__` in the last segment. '!/**/*__*/**', // Exclude URLs containing `__` in any other segment. ] ``` ### `navigationRequestStrategy` This optional property enables you to configure how the service worker handles navigation requests: ``` { "navigationRequestStrategy": "freshness" } ``` | Possible values | Details | | --- | --- | | `'performance'` | The default setting. Serves the specified [index file](service-worker-config#index-file), which is typically cached. | | `'freshness'` | Passes the requests through to the network and falls back to the `performance` behavior when offline. This value is useful when the server redirects the navigation requests elsewhere using a `3xx` HTTP redirect status code. Reasons for using this value include: * Redirecting to an authentication website when authentication is not handled by the application * Redirecting specific URLs to avoid breaking existing links/bookmarks after a website redesign * Redirecting to a different website, such as a server-status page, while a page is temporarily down | > The `freshness` strategy usually results in more requests sent to the server, which can increase response latency. It is recommended that you use the default performance strategy whenever possible. > > Last reviewed on Mon Feb 28 2022
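As a consolidated illustration of the properties described in this topic, a minimal `ngsw-config.json` might look like the following sketch. The group names, file patterns, URLs, and cache limits are placeholder assumptions, not values required by the service worker:

```
{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": {
        "files": ["/index.html", "/*.css", "/*.js"]
      }
    },
    {
      "name": "assets",
      "installMode": "lazy",
      "updateMode": "prefetch",
      "resources": {
        "files": ["/assets/**"]
      }
    }
  ],
  "dataGroups": [
    {
      "name": "api-performance",
      "urls": ["/api/**"],
      "cacheConfig": {
        "strategy": "performance",
        "maxSize": 100,
        "maxAge": "3d"
      }
    }
  ],
  "navigationRequestStrategy": "performance"
}
```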
angular Getting started with NgOptimizedImage Getting started with NgOptimizedImage ===================================== The `[NgOptimizedImage](../api/common/ngoptimizedimage)` directive makes it easy to adopt performance best practices for loading images. The directive ensures that the loading of the [Largest Contentful Paint (LCP)](http://web.dev/lcp) image is prioritized by: * Automatically setting the `fetchpriority` attribute on the `<[img](../api/common/ngoptimizedimage)>` tag * Lazy loading other images by default * Asserting that there is a corresponding preconnect link tag in the document head * Automatically generating a `srcset` attribute * Generating a [preload hint](https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preload) if app is using SSR In addition to optimizing the loading of the LCP image, `[NgOptimizedImage](../api/common/ngoptimizedimage)` enforces a number of image best practices, such as: * Using [image CDN URLs to apply image optimizations](https://web.dev/image-cdns/#how-image-cdns-use-urls-to-indicate-optimization-options) * Preventing layout shift by requiring `width` and `height` * Warning if `width` or `height` have been set incorrectly * Warning if the image will be visually distorted when rendered Getting Started --------------- #### Step 1: Import NgOptimizedImage ``` import { NgOptimizedImage } from '@angular/common' ``` The directive is defined as a [standalone directive](standalone-components), so components should import it directly. #### Step 2: (Optional) Set up a Loader An image loader is not **required** in order to use NgOptimizedImage, but using one with an image CDN enables powerful performance features, including automatic `srcset`s for your images. A brief guide for setting up a loader can be found in the [Configuring an Image Loader](image-directive#configuring-an-image-loader-for-ngoptimizedimage) section at the end of this page. #### Step 3: Enable the directive To activate the `[NgOptimizedImage](../api/common/ngoptimizedimage)` directive, replace your image's `src` attribute with `[ngSrc](../api/common/ngoptimizedimage)`. ``` <img ngSrc="cat.jpg"> ``` If you're using a [built-in third-party loader](image-directive#built-in-loaders), make sure to omit the base URL path from `src`, as that will be prepended automatically by the loader. #### Step 4: Mark images as `priority` Always mark the [LCP image](https://web.dev/lcp/#what-elements-are-considered) on your page as `priority` to prioritize its loading. ``` <img ngSrc="cat.jpg" width="400" height="200" priority> ``` Marking an image as `priority` applies the following optimizations: * Sets `fetchpriority=high` (read more about priority hints [here](https://web.dev/priority-hints)) * Sets `loading=eager` (read more about native lazy loading [here](https://web.dev/browser-level-image-lazy-loading)) * Automatically generates a [preload link element](https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preload) if [rendering on the server](universal). Angular displays a warning during development if the LCP element is an image that does not have the `priority` attribute. A page’s LCP element can vary based on a number of factors - such as the dimensions of a user's screen, so a page may have multiple images that should be marked `priority`. See [CSS for Web Vitals](https://web.dev/css-web-vitals/#images-and-largest-contentful-paint-lcp) for more details. 
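Putting steps 1 through 4 together, a minimal standalone component might look like the following sketch. The component name and image file are placeholders, and the image is assumed to be the page's LCP element:

```
import { Component } from '@angular/core';
import { NgOptimizedImage } from '@angular/common';

@Component({
  selector: 'app-hero-banner',
  standalone: true,
  // The directive is standalone, so import it directly into the component.
  imports: [NgOptimizedImage],
  template: `
    <!-- LCP image: marked priority; other images are lazy loaded by default. -->
    <img ngSrc="hero.jpg" width="400" height="200" priority>
  `,
})
export class HeroBannerComponent {}
```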
#### Step 5: Include Height and Width In order to prevent [image-related layout shifts](https://web.dev/css-web-vitals/#images-and-layout-shifts), NgOptimizedImage requires that you specify a height and width for your image, as follows: ``` <img ngSrc="cat.jpg" width="400" height="200"> ``` For **responsive images** (images which you've styled to grow and shrink relative to the viewport), the `width` and `height` attributes should be the intrinsic size of the image file. For **fixed size images**, the `width` and `height` attributes should reflect the desired rendered size of the image. The aspect ratio of these attributes should always match the intrinsic aspect ratio of the image. Note: If you don't know the size of your images, consider using "fill mode" to inherit the size of the parent container, as described below: ### Using `fill` mode In cases where you want to have an image fill a containing element, you can use the `fill` attribute. This is often useful when you want to achieve a "background image" behavior. It can also be helpful when you don't know the exact width and height of your image, but you do have a parent container with a known size that you'd like to fit your image into (see "object-fit" below). When you add the `fill` attribute to your image, you do not need and should not include a `width` and `height`, as in this example: ``` <img ngSrc="cat.jpg" fill> ``` You can use the [object-fit](https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit) CSS property to change how the image will fill its container. If you style your image with `object-fit: "contain"`, the image will maintain its aspect ratio and be "letterboxed" to fit the element. If you set `object-fit: "cover"`, the element will retain its aspect ratio, fully fill the element, and some content may be "cropped" off. See visual examples of the above at the [MDN object-fit documentation.](https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit) You can also style your image with the [object-position property](https://developer.mozilla.org/en-US/docs/Web/CSS/object-position) to adjust its position within its containing element. **Important note:** For the "fill" image to render properly, its parent element **must** be styled with `position: "relative"`, `position: "fixed"`, or `position: "absolute"`. ### Adjusting image styling Depending on the image's styling, adding `width` and `height` attributes may cause the image to render differently. `[NgOptimizedImage](../api/common/ngoptimizedimage)` warns you if your image styling renders the image at a distorted aspect ratio. You can typically fix this by adding `height: auto` or `width: auto` to your image styles. For more information, see the [web.dev article on the `<img>` tag](https://web.dev/patterns/web-vitals-patterns/images/img-tag). If the `height` and `width` attributes on the image are preventing you from sizing the image the way you want with CSS, consider using "fill" mode instead, and styling the image's parent element. Performance Features -------------------- NgOptimizedImage includes a number of features designed to improve loading performance in your app. These features are described in this section. ### Add resource hints You can add a [`preconnect` resource hint](https://web.dev/preconnect-and-dns-prefetch) for your image origin to ensure that the LCP image loads as quickly as possible. Always put resource hints in the `<head>` of the document.
``` <link rel="preconnect" href="https://my.cdn.origin" /> ``` By default, if you use a loader for a third-party image service, the `[NgOptimizedImage](../api/common/ngoptimizedimage)` directive will warn during development if it detects that there is no `preconnect` resource hint for the origin that serves the LCP image. To disable these warnings, inject the `[PRECONNECT\_CHECK\_BLOCKLIST](../api/common/preconnect_check_blocklist)` token: ``` providers: [ {provide: PRECONNECT_CHECK_BLOCKLIST, useValue: 'https://your-domain.com'} ], ``` ### Request images at the correct size with automatic `srcset` Defining a [`srcset` attribute](https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement/srcset) ensures that the browser requests an image at the right size for your user's viewport, so it doesn't waste time downloading an image that's too large. `[NgOptimizedImage](../api/common/ngoptimizedimage)` generates an appropriate `srcset` for the image, based on the presence and value of the [`sizes` attribute](https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement/sizes) on the image tag. #### Fixed-size images If your image should be "fixed" in size (i.e. the same size across devices, except for [pixel density](https://web.dev/codelab-density-descriptors/)), there is no need to set a `sizes` attribute. A `srcset` can be generated automatically from the image's width and height attributes with no further input required. Example srcset generated: `<[img](../api/common/ngoptimizedimage) ... srcset="image-400w.jpg 1x, image-800w.jpg 2x">` #### Responsive images If your image should be responsive (i.e. grow and shrink according to viewport size), then you will need to define a [`sizes` attribute](https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement/sizes) to generate the `srcset`. If you haven't used `sizes` before, a good place to start is to set it based on viewport width. For example, if your CSS causes the image to fill 100% of viewport width, set `sizes` to `100vw` and the browser will select the image in the `srcset` that is closest to the viewport width (after accounting for pixel density). If your image is only likely to take up half the screen (ex: in a sidebar), set `sizes` to `50vw` to ensure the browser selects a smaller image. And so on. If you find that the above does not cover your desired image behavior, see the documentation on [advanced sizes values](image-directive#advanced-sizes-values). By default, the responsive breakpoints are: `[16, 32, 48, 64, 96, 128, 256, 384, 640, 750, 828, 1080, 1200, 1920, 2048, 3840]` If you would like to customize these breakpoints, you can do so using the `[IMAGE\_CONFIG](../api/common/image_config)` provider: ``` providers: [ { provide: IMAGE_CONFIG, useValue: { breakpoints: [16, 48, 96, 128, 384, 640, 750, 828, 1080, 1200, 1920] } }, ], ``` If you would like to manually define a `srcset` attribute, you can provide your own using the `ngSrcset` attribute: ``` <img ngSrc="hero.jpg" ngSrcset="100w, 200w, 300w"> ``` If the `ngSrcset` attribute is present, `[NgOptimizedImage](../api/common/ngoptimizedimage)` generates and sets the `srcset` based on the sizes included. Do not include image file names in `ngSrcset` - the directive infers this information from `[ngSrc](../api/common/ngoptimizedimage)`. The directive supports both width descriptors (e.g. `100w`) and density descriptors (e.g. `1x`). 
``` <img ngSrc="hero.jpg" ngSrcset="100w, 200w, 300w" sizes="50vw"> ``` ### Disabling automatic srcset generation To disable srcset generation for a single image, you can add the `disableOptimizedSrcset` attribute on the image: ``` <img ngSrc="about.jpg" disableOptimizedSrcset> ``` ### Disabling image lazy loading By default, `[NgOptimizedImage](../api/common/ngoptimizedimage)` sets `loading=lazy` for all images that are not marked `priority`. You can disable this behavior for non-priority images by setting the `loading` attribute. This attribute accepts values: `eager`, `auto`, and `lazy`. [See the documentation for the standard image `loading` attribute for details](https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement/loading#value). ``` <img ngSrc="cat.jpg" width="400" height="200" loading="eager"> ``` ### Advanced 'sizes' values You may want to have images displayed at varying widths on differently-sized screens. A common example of this pattern is a grid- or column-based layout that renders a single column on mobile devices, and two columns on larger devices. You can capture this behavior in the `sizes` attribute, using a "media query" syntax, such as the following: ``` <img ngSrc="cat.jpg" width="400" height="200" sizes="(max-width: 768px) 100vw, 50vw"> ``` The `sizes` attribute in the above example says "I expect this image to be 100 percent of the screen width on devices under 768px wide. Otherwise, I expect it to be 50 percent of the screen width. For additional information about the `sizes` attribute, see [web.dev](https://web.dev/learn/design/responsive-images/#sizes) or [mdn](https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageElement/sizes). Configuring an image loader for `[NgOptimizedImage](../api/common/ngoptimizedimage)` ------------------------------------------------------------------------------------ A "loader" is a function that generates an [image transformation URL](https://web.dev/image-cdns/#how-image-cdns-use-urls-to-indicate-optimization-options) for a given image file. When appropriate, `[NgOptimizedImage](../api/common/ngoptimizedimage)` sets the size, format, and image quality transformations for an image. `[NgOptimizedImage](../api/common/ngoptimizedimage)` provides both a generic loader that applies no transformations, as well as loaders for various third-party image services. It also supports writing your own custom loader. | Loader type | Behavior | | --- | --- | | Generic loader | The URL returned by the generic loader will always match the value of `src`. In other words, this loader applies no transformations. Sites that use Angular to serve images are the primary intended use case for this loader. | | Loaders for third-party image services | The URL returned by the loaders for third-party image services will follow API conventions used by that particular image service. | | Custom loaders | A custom loader's behavior is defined by its developer. You should use a custom loader if your image service isn't supported by the loaders that come preconfigured with `[NgOptimizedImage](../api/common/ngoptimizedimage)`. 
| Based on the image services commonly used with Angular applications, `[NgOptimizedImage](../api/common/ngoptimizedimage)` provides loaders preconfigured to work with the following image services: | Image Service | Angular API | Documentation | | --- | --- | --- | | Cloudflare Image Resizing | `[provideCloudflareLoader](../api/common/providecloudflareloader)` | [Documentation](https://developers.cloudflare.com/images/image-resizing/) | | Cloudinary | `[provideCloudinaryLoader](../api/common/providecloudinaryloader)` | [Documentation](https://cloudinary.com/documentation/resizing_and_cropping) | | ImageKit | `[provideImageKitLoader](../api/common/provideimagekitloader)` | [Documentation](https://docs.imagekit.io/) | | Imgix | `[provideImgixLoader](../api/common/provideimgixloader)` | [Documentation](https://docs.imgix.com/) | To use the **generic loader** no additional code changes are necessary. This is the default behavior. ### Built-in Loaders To use an existing loader for a **third-party image service**, add the provider factory for your chosen service to the `providers` array. In the example below, the Imgix loader is used: ``` providers: [ provideImgixLoader('https://my.base.url/'), ], ``` The base URL for your image assets should be passed to the provider factory as an argument. For most sites, this base URL should match one of the following patterns: * <https://yoursite.yourcdn.com> * <https://subdomain.yoursite.com> * <https://subdomain.yourcdn.com/yoursite> You can learn more about the base URL structure in the docs of a corresponding CDN provider. ### Custom Loaders To use a **custom loader**, provide your loader function as a value for the `[IMAGE\_LOADER](../api/common/image_loader)` DI token. In the example below, the custom loader function returns a URL starting with `https://example.com` that includes `src` and `width` as URL parameters. ``` providers: [ { provide: IMAGE_LOADER, useValue: (config: ImageLoaderConfig) => { return `https://example.com/images?src=${config.src}&width=${config.width}`; }, }, ], ``` A loader function for the `[NgOptimizedImage](../api/common/ngoptimizedimage)` directive takes an object with the `[ImageLoaderConfig](../api/common/imageloaderconfig)` type (from `@angular/common`) as its argument and returns the absolute URL of the image asset. The `[ImageLoaderConfig](../api/common/imageloaderconfig)` object contains the `src` and `width` properties. Note: a custom loader must support requesting images at various widths in order for `ngSrcset` to work properly. Last reviewed on Mon Nov 07 2022 angular Update a documentation pull request Update a documentation pull request =================================== This topic describes how to respond to test failures and feedback on your pull request. After you open a pull request, it is tested and reviewed. After it's approved, the changes are merged into `angular/angular` and they become part of the Angular documentation. While some pull requests are approved with no further action on your part, most pull requests receive feedback that requires you to make a change. Anatomy of a pull request ------------------------- After you open a pull request, the pull request page records the activity on the pull request as it is reviewed, updated, and approved. This is an example of the top of a pull request page followed by a description of the information it contains. 
Above the pull-request tabs is a summary of the pull request that includes: * The pull request title and index * The status of the pull request: open or closed * A description of the branch with the changes and the branch to update The tabs contain different aspects of the pull request. * **Conversation** All comments and changes to the pull request, system messages, and a summary of the automated tests and approvals. * **Commits** The log of the commits included in this pull request. * **Checks** The results of the checks run on the commit. This is different from the automated tests that are also run and summarized at the bottom of the **Conversation** tab. * **Files changed** The changes this request makes to the code. In this tab is where you find specific comments to the changes in your pull request. You can reply to those comments in this tab, as well. Respond to a comment -------------------- If your pull request receives comments from a reviewer, you can respond in several ways. * Reply to the feedback. For example, you can ask for more information or reply with an explanation. * Make the changes to the documentation that the reviewer recommends. If you update the working branch with the suggested changes, resolve the comment. * Make other changes to the documentation. After reviewing the feedback, you might see an even better improvement. Update the working branch with your improvement and explain why you chose that to your reviewers in a comment. Remember that pull requests that do not receive a response to a review comment are considered abandoned and closed. For more information about abandoned pull requests, see [What happens to abandoned pull requests](doc-pr-open#what-happens-to-abandoned-pull-requests). ### Update a file in the pull request Follow this procedure to change a file in the pull request or to add a new file to the pull request. 1. In your `git` workspace, in your working directory, checkout your working branch. 2. Update the documentation to respond to the feedback you received. The procedures used to [revise a documentation topic](doc-editing) are also used to update the documentation while there's an open pull request. 3. Test your update locally as described in [Testing a documentation update](doc-build-test). 4. After your updates have been tested, commit your changes and push the new commits to the working branch of your repo on your `origin` server. 5. After you update the working branch on your `origin` server, the fork of the `angular/angular` repo in your GitHub account, your pull request updates automatically. 6. After the pull request updates, the automated tests are restarted and the reviewers are notified. Repeat this process as needed to address the feedback you get from reviews of the pull request. Clean up the branch ------------------- If you added commits to address review feedback, you might be requested to clean up your working branch. If some of the commits you made address only review feedback from your reviewers, they can probably be squashed. Squashing commits combines the changes made in multiple commits into a single commit. #### To squash commits in your working branch Perform these steps from a command-line tool on your local computer. 1. In your [workspace](doc-prepare-to-edit#create-a-git-workspace-on-your-local-computer) directory, in your [working directory](doc-prepare-to-edit#doc-working-directory), checkout your working branch. 2. Run this command to view the commits in your working branch. 
``` git log --pretty=format:"%h %as %an %Cblue%s %Cgreen%D" ``` 3. In the output of the previous `git log` command, find the entry that contains `upstream/main`. It should be near the top of the list. 1. **Confirm that the entry also contains `origin/main` and `main`** If it doesn't, you must resync the clone on your local computer and your `personal/angular` repo with the `upstream` repo. After you resync the repos, [rebase the working branch](doc-pr-prep#rebase-your-working-branch) before you continue. 2. **Confirm that all commits for your update are after the entry that contains `upstream/main`** Remember that the log output is displayed with the most recent commit first. Your commits should all be on top of the entry that contains `upstream/main` in the log output. If you have commits that are listed after the entry that contains `upstream/main`, somehow your commits in the working branch got mixed up. You must fix the branch before you try to squash any commits. 4. Count the commit entries that are on top of the entry that contains `upstream/main`. For example, if the working branch is named `update-doc-contribution` and has five commit entries on top of the entry that contains `upstream/main`, the count is five. 5. Run this command to squash the commits that occurred after the entry that contains `upstream/main`. In your command, replace the `5` after `HEAD` with the number of commits on top of the entry that contains `upstream/main`. ``` git rebase -i HEAD~5 ``` 6. This command opens your default editor with entries for the commits that you selected in the `git rebase` command. 7. To squash the commits, edit the commands in the file that's presented in the editor. The commands in the editor are listed from oldest to newest, which is the opposite order from how they are listed by the `git log` command. The possible command options are listed in the editor below the commands. To squash the commits for your pull request, you only need `pick` and `squash`. 8. Review the commands in the editor and change them to match your intention. The commands are processed from top to bottom, that is, from the oldest commit to the most recent. To merge all commits in this branch for this pull request, change the `pick` commands to `squash` for all commits except for the first one. This text shows how this looks for this example. ``` pick bb0ff71891 docs: update of documentation contrib. guide squash c040d76685 docs: more content for doc updates squash 472585c43f docs: fix links that were broken by renamed files squash 3e6f4c73ac docs: add more info about open PR squash 8e50fad064 docs: more pr docs ``` With this edit, `git rebase` picks the first commit and combines the later commits into the first one. The commit message of the commit with the `pick` command is the commit message used for the resulting commit. Make sure that it is in the correct format and starts with `docs:`. If you need to change the commit message, you can edit it in the editor. 9. After you update the commands, save and exit the editor. The `git rebase` command processes the commands and updates the commit log in your workspace. In this example, the rebase combines the five commits into a single commit in your working branch. Run `git log` again to confirm that the working branch now contains the single squashed commit. 10. The `git rebase` command changes the commit log in your local computer so it is now different from the one in your online repo. To update your online repo, you must force your push of changes from your local computer using this command. 
``` git push --force-with-lease ``` This action is also called a *force push* because it changes the commit log that is stored in your GitHub account. Normally, when you run `git push`, you add new commits to the online repo. When other people have forked a repo, a force push can have undesired effects for them. Here, the force push updates only the fork in your own GitHub account, which other contributors should not be basing work on, so it is generally safe. 11. After your force push updates the online repo, your pull request restarts the automated tests and notifies the reviewers of the update. Next steps ---------- Repeat these update steps as necessary to respond to all the feedback you receive. After you address all the feedback and your pull request has been approved, it is merged into `angular/angular`. The changes in your pull request should appear in the documentation shortly afterwards. After your pull request is merged into `angular/angular`, you can [clean up your workspace](doc-edit-finish). Last reviewed on Wed Oct 12 2022
angular Observables compared to other techniques Observables compared to other techniques ======================================== You can often use observables instead of promises to deliver values asynchronously. Similarly, observables can take the place of event handlers. Finally, because observables deliver multiple values, you can use them where you might otherwise build and operate on arrays. Observables behave somewhat differently from the alternative techniques in each of these situations, but offer some significant advantages. Here are detailed comparisons of the differences. Observables compared to promises -------------------------------- Observables are often compared to promises. Here are some key differences: * Observables are declarative; computation does not start until subscription. Promises execute immediately on creation. This makes observables useful for defining recipes that can be run whenever you need the result. * Observables provide many values. Promises provide one. This makes observables useful for getting multiple values over time. * Observables differentiate between chaining and subscription. Promises only have `.then()` clauses. This makes observables useful for creating complex transformation recipes to be used by other parts of the system, without causing the work to be executed. * With observables, `subscribe()` is responsible for handling errors. Promises push errors to the child promises. This makes observables useful for centralized and predictable error handling. ### Creation and subscription * Observables are not executed until a consumer subscribes. The `subscribe()` executes the defined behavior once, and it can be called again. Each subscription has its own computation. Resubscription causes recomputation of values. ``` // declare a publishing operation const observable = new Observable<number>(observer => { // Subscriber fn... }); // initiate execution observable.subscribe(value => { // observer handles notifications }); ``` * Promises execute immediately, and just once. The computation of the result is initiated when the promise is created. There is no way to restart work. All `then` clauses (subscriptions) share the same computation. ``` // initiate execution let promise = new Promise<number>(resolve => { // Executor fn... }); promise.then(value => { // handle result here }); ``` ### Chaining * Observables differentiate between transformation functions, such as `map`, and subscription. Only subscription activates the subscriber function to start computing the values. ``` observable.pipe(map(v => 2 * v)); ``` * Promises do not differentiate between the last `.then` clauses (equivalent to subscription) and intermediate `.then` clauses (equivalent to map). ``` promise.then(v => 2 * v); ``` ### Cancellation * Observable subscriptions are cancellable. Unsubscribing removes the listener from receiving further values, and notifies the subscriber function to cancel work. ``` const subscription = observable.subscribe(() => { // observer handles notifications }); subscription.unsubscribe(); ``` * Promises are not cancellable. ### Error handling * Observable execution errors are delivered to the subscriber's error handler, and the subscriber automatically unsubscribes from the observable. ``` observable.subscribe(() => { throw new Error('my error'); }); ``` * Promises push errors to the child promises. 
``` promise.then(() => { throw new Error('my error'); }); ``` ### Cheat sheet The following code snippets illustrate how the same kind of operation is defined using observables and promises. | Operation | Observable | Promise | | --- | --- | --- | | Creation | ``` new Observable((observer) => {   observer.next(123); }); ``` | ``` new Promise((resolve, reject) => {   resolve(123); }); ``` | | Transform | ``` obs.pipe(map((value) => value * 2)); ``` | ``` promise.then((value) => value * 2); ``` | | Subscribe | ``` sub = obs.subscribe((value) => {   console.log(value) }); ``` | ``` promise.then((value) => {   console.log(value); }); ``` | | Unsubscribe | ``` sub.unsubscribe(); ``` | Implied by promise resolution. | Observables compared to events API ---------------------------------- Observables are very similar to event handlers that use the events API. Both techniques define notification handlers, and use them to process multiple values delivered over time. Subscribing to an observable is equivalent to adding an event listener. One significant difference is that you can configure an observable to transform an event before passing the event to the handler. Using observables to handle events and asynchronous operations can have the advantage of greater consistency in contexts such as HTTP requests. Here are some code samples that illustrate how the same kind of operation is defined using observables and the events API. | | Observable | Events API | | --- | --- | --- | | Creation & cancellation | ``` // Setup const clicks$ = fromEvent(buttonEl, 'click'); // Begin listening const subscription = clicks$   .subscribe(e => console.log('Clicked', e)) // Stop listening subscription.unsubscribe(); ``` | ``` function handler(e) {   console.log('Clicked', e); } // Setup & begin listening button.addEventListener('click', handler); // Stop listening button.removeEventListener('click', handler); ``` | | Subscription | ``` observable.subscribe(() => {   // notification handlers here }); ``` | ``` element.addEventListener(eventName, (event) => {   // notification handler here }); ``` | | Configuration | Listen for keystrokes, but provide a stream representing the value in the input. ``` fromEvent(inputEl, 'keydown').pipe(   map(e => e.target.value) ); ``` | Does not support configuration. ``` element.addEventListener(eventName, (event) => {   // Cannot change the passed Event into another   // value before it gets to the handler }); ``` | Observables compared to arrays ------------------------------ An observable produces values over time. An array is created as a static set of values. In a sense, observables are asynchronous where arrays are synchronous. In the following examples, `→` implies asynchronous value delivery. 
| Values | Observable | Array | | --- | --- | --- | | Given | ``` obs: →1→2→3→5→7 ``` ``` obsB: →'a'→'b'→'c' ``` | ``` arr: [1, 2, 3, 5, 7] ``` ``` arrB: ['a', 'b', 'c'] ``` | | `concat()` | ``` concat(obs, obsB) ``` ``` →1→2→3→5→7→'a'→'b'→'c' ``` | ``` arr.concat(arrB) ``` ``` [1,2,3,5,7,'a','b','c'] ``` | | `filter()` | ``` obs.pipe(filter((v) => v>3)) ``` ``` →5→7 ``` | ``` arr.filter((v) => v>3) ``` ``` [5, 7] ``` | | `find()` | ``` obs.pipe(find((v) => v>3)) ``` ``` →5 ``` | ``` arr.find((v) => v>3) ``` ``` 5 ``` | | `findIndex()` | ``` obs.pipe(findIndex((v) => v>3)) ``` ``` →3 ``` | ``` arr.findIndex((v) => v>3) ``` ``` 3 ``` | | `forEach()` | ``` obs.pipe(tap((v) => {   console.log(v); })) 1 2 3 5 7 ``` | ``` arr.forEach((v) => {   console.log(v); }) 1 2 3 5 7 ``` | | `map()` | ``` obs.pipe(map((v) => -v)) ``` ``` →-1→-2→-3→-5→-7 ``` | ``` arr.map((v) => -v) ``` ``` [-1, -2, -3, -5, -7] ``` | | `reduce()` | ``` obs.pipe(reduce((s,v)=> s+v, 0)) ``` ``` →18 ``` | ``` arr.reduce((s,v) => s+v, 0) ``` ``` 18 ``` | Last reviewed on Mon Feb 28 2022 angular Angular CLI builders Angular CLI builders ==================== A number of Angular CLI commands run a complex process on your code, such as linting, building, or testing. The commands use an internal tool called Architect to run *CLI builders*, which apply another tool to accomplish the wanted task. With Angular version 8, the CLI Builder API is stable and available to developers who want to customize the Angular CLI by adding or modifying commands. For example, you could supply a builder to perform an entirely new task, or to change which third-party tool is used by an existing command. This document explains how CLI builders integrate with the workspace configuration file, and shows how you can create your own builder. > Find the code from the examples used here in this [GitHub repository](https://github.com/mgechev/cli-builders-demo). > > CLI builders ------------ The internal Architect tool delegates work to handler functions called [*builders*](glossary#builder). A builder handler function receives two arguments; a set of input `options` (a JSON object), and a `context` (a `BuilderContext` object). The separation of concerns here is the same as with [schematics](glossary#schematic), which are used for other CLI commands that touch your code (such as `ng generate`). * The `options` object is provided by the CLI user, while the `context` object is provided by the CLI Builder API * In addition to the contextual information, the `context` object, which is an instance of the `BuilderContext`, also provides access to a scheduling method, `context.scheduleTarget()`. The scheduler executes the builder handler function with a given [target configuration](glossary#target). The builder handler function can be synchronous (return a value) or asynchronous (return a Promise), or it can watch and return multiple values (return an Observable). The return value or values must always be of type `BuilderOutput`. This object contains a Boolean `success` field and an optional `error` field that can contain an error message. Angular provides some builders that are used by the CLI for commands such as `ng build` and `ng test`. Default target configurations for these and other built-in CLI builders can be found (and customized) in the "architect" section of the [workspace configuration file](workspace-config), `angular.json`. Also, extend and customize Angular by creating your own builders, which you can run using the [`ng run` CLI command](cli/run). 
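Before looking at the builder project structure, it may help to see the Observable-returning shape mentioned above in code. The following is a hedged sketch that is not part of the guide's copy-file example; the option name and the watcher wiring are illustrative only:

```
import { BuilderContext, BuilderOutput, createBuilder } from '@angular-devkit/architect';
import { JsonObject } from '@angular-devkit/core';
import { Observable } from 'rxjs';

interface Options extends JsonObject {
  // Illustrative option; a real builder defines its own inputs in its schema.
  verbose: boolean;
}

export default createBuilder(watchingBuilder);

// A watch-style handler: it emits a BuilderOutput after each run and stays
// subscribed between runs, which is the shape Architect expects for watch mode.
function watchingBuilder(options: Options, context: BuilderContext): Observable<BuilderOutput> {
  return new Observable<BuilderOutput>(observer => {
    if (options.verbose) {
      context.logger.info('Starting in watch mode...');
    }
    // ... start a file watcher here; on each change, call
    // context.reportRunning(), rebuild, then emit another BuilderOutput ...
    observer.next({ success: true }); // result of the initial run
    return () => {
      // Teardown: stop the watcher when Architect unsubscribes.
    };
  });
}
```

Builders that run once and return, like the copy-file example developed in the rest of this guide, simply return a `Promise<BuilderOutput>` instead; watch mode is discussed in more detail at the end of this guide.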
### Builder project structure A builder resides in a "project" folder that is similar in structure to an Angular workspace, with global configuration files at the top level, and more specific configuration in a source folder with the code files that define the behavior. For example, your `myBuilder` folder could contain the following files. | Files | Purpose | | --- | --- | | `src/my-builder.ts` | Main source file for the builder definition. | | `src/my-builder.spec.ts` | Source file for tests. | | `src/schema.json` | Definition of builder input options. | | `builders.json` | Builders definition. | | `package.json` | Dependencies. See <https://docs.npmjs.com/files/package.json>. | | `tsconfig.json` | [TypeScript configuration](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html). | Publish the builder to `npm` (see [Publishing your Library](creating-libraries#publishing-your-library)). If you publish it as `@example/my-builder`, install it using the following command. ``` npm install @example/my-builder ``` Creating a builder ------------------ As an example, create a builder that copies a file. To create a builder, use the `createBuilder()` CLI Builder function, and return a `Promise<BuilderOutput>` object. ``` import { BuilderContext, BuilderOutput, createBuilder } from '@angular-devkit/architect'; import { JsonObject } from '@angular-devkit/core'; interface Options extends JsonObject { source: string; destination: string; } export default createBuilder(copyFileBuilder); async function copyFileBuilder( options: Options, context: BuilderContext, ): Promise<BuilderOutput> { } ``` Now let's add some logic to it. The following code retrieves the source and destination file paths from user options and copies the file from the source to the destination (using the [Promise version of the built-in NodeJS `copyFile()` function](https://nodejs.org/api/fs.html#fs_fspromises_copyfile_src_dest_mode)). If the copy operation fails, it returns an error with a message about the underlying problem. ``` import { BuilderContext, BuilderOutput, createBuilder } from '@angular-devkit/architect'; import { JsonObject } from '@angular-devkit/core'; import { promises as fs } from 'fs'; interface Options extends JsonObject { source: string; destination: string; } export default createBuilder(copyFileBuilder); async function copyFileBuilder( options: Options, context: BuilderContext, ): Promise<BuilderOutput> { try { await fs.copyFile(options.source, options.destination); } catch (err) { return { success: false, error: err.message, }; } return { success: true }; } ``` ### Handling output By default, `copyFile()` does not print anything to the process standard output or error. If an error occurs, it might be difficult to understand exactly what the builder was trying to do when the problem occurred. Add some additional context by logging additional information using the `Logger` API. This also lets the builder itself be executed in a separate process, even if the standard output and error are deactivated (as in an [Electron app](https://electronjs.org)). You can retrieve a `Logger` instance from the context. 
``` import { BuilderContext, BuilderOutput, createBuilder } from '@angular-devkit/architect'; import { JsonObject } from '@angular-devkit/core'; import { promises as fs } from 'fs'; interface Options extends JsonObject { source: string; destination: string; } export default createBuilder(copyFileBuilder); async function copyFileBuilder( options: Options, context: BuilderContext, ): Promise<BuilderOutput> { try { await fs.copyFile(options.source, options.destination); } catch (err) { context.logger.error('Failed to copy file.'); return { success: false, error: err.message, }; } return { success: true }; } ``` ### Progress and status reporting The CLI Builder API includes progress and status reporting tools, which can provide hints for certain functions and interfaces. To report progress, use the `context.reportProgress()` method, which takes a current value, (optional) total, and status string as arguments. The total can be any number; for example, if you know how many files you have to process, the total could be the number of files, and current should be the number processed so far. The status string is unmodified unless you pass in a new string value. You can see an [example](https://github.com/angular/angular-cli/blob/ba21c855c0c8b778005df01d4851b5a2176edc6f/packages/angular_devkit/build_angular/src/tslint/index.ts#L107) of how the `tslint` builder reports progress. In our example, the copy operation either finishes or is still executing, so there's no need for a progress report, but you can report status so that a parent builder that called our builder would know what's going on. Use the `context.reportStatus()` method to generate a status string of any length. > **NOTE**: There's no guarantee that a long string will be shown entirely; it could be cut to fit the UI that displays it. > > Pass an empty string to remove the status. ``` import { BuilderContext, BuilderOutput, createBuilder } from '@angular-devkit/architect'; import { JsonObject } from '@angular-devkit/core'; import { promises as fs } from 'fs'; interface Options extends JsonObject { source: string; destination: string; } export default createBuilder(copyFileBuilder); async function copyFileBuilder( options: Options, context: BuilderContext, ): Promise<BuilderOutput> { context.reportStatus(`Copying ${options.source} to ${options.destination}.`); try { await fs.copyFile(options.source, options.destination); } catch (err) { context.logger.error('Failed to copy file.'); return { success: false, error: err.message, }; } context.reportStatus('Done.'); return { success: true }; } ``` Builder input ------------- You can invoke a builder indirectly through a CLI command, or directly with the Angular CLI `ng run` command. In either case, you must provide required inputs, but can let other inputs default to values that are pre-configured for a specific [*target*](glossary#target), provide a pre-defined, named override configuration, and provide further override option values on the command line. ### Input validation You define builder inputs in a JSON schema associated with that builder. The Architect tool collects the resolved input values into an `options` object, and validates their types against the schema before passing them to the builder function. (The Schematics library does the same kind of validation of user input.) For our example builder, you expect the `options` value to be a `JsonObject` with two keys: a `source` and a `destination`, each of which is a string. 
You can provide the following schema for type validation of these values. ``` { "$schema": "http://json-schema.org/schema", "type": "object", "properties": { "source": { "type": "string" }, "destination": { "type": "string" } } } ``` > This is a very simple example, but the use of a schema for validation can be very powerful. For more information, see the [JSON schemas website](http://json-schema.org). > > To link our builder implementation with its schema and name, you need to create a *builder definition* file, which you can point to in `package.json`. Create a file named `builders.json` that looks like this: ``` { "builders": { "copy": { "implementation": "./dist/my-builder.js", "schema": "./src/schema.json", "description": "Copies a file." } } } ``` In the `package.json` file, add a `builders` key that tells the Architect tool where to find our builder definition file. ``` { "name": "@example/copy-file", "version": "1.0.0", "description": "Builder for copying files", "builders": "builders.json", "dependencies": { "@angular-devkit/architect": "~0.1200.0", "@angular-devkit/core": "^12.0.0" } } ``` The official name of our builder is now `@example/copy-file:copy`. The first part of this is the package name (resolved using node resolution), and the second part is the builder name (resolved using the `builders.json` file). Using one of our `options` is very straightforward. You did this in the previous section when you accessed `options.source` and `options.destination`. ``` context.reportStatus(`Copying ${options.source} to ${options.destination}.`); try { await fs.copyFile(options.source, options.destination); } catch (err) { context.logger.error('Failed to copy file.'); return { success: false, error: err.message, }; } context.reportStatus('Done.'); return { success: true }; ``` ### Target configuration A builder must have a defined target that associates it with a specific input configuration and [project](glossary#project). Targets are defined in the `angular.json` [CLI configuration file](workspace-config). A target specifies the builder to use, its default options configuration, and named alternative configurations. The Architect tool uses the target definition to resolve input options for a given run. The `angular.json` file has a section for each project, and the "architect" section of each project configures targets for builders used by CLI commands such as 'build', 'test', and 'lint'. By default, for example, the `build` command runs the builder `@angular-devkit/build-angular:[browser](../api/animations/browser)` to perform the build task, and passes in default option values as specified for the `build` target in `angular.json`. ``` { "myApp": { … "architect": { "build": { "builder": "@angular-devkit/build-angular:browser", "options": { "outputPath": "dist/myApp", "index": "src/index.html", … }, "configurations": { "production": { "fileReplacements": [ { "replace": "src/environments/environment.ts", "with": "src/environments/environment.prod.ts" } ], "optimization": true, "outputHashing": "all", … } } }, … ``` The command passes the builder the set of default options specified in the "options" section. If you pass the `--configuration=production` flag, it uses the override values specified in the `production` alternative configuration. Specify further option overrides individually on the command line. You might also add more alternative configurations to the `build` target, to define other environments such as `stage` or `qa`. 
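To connect this configuration model back to the scheduling API introduced earlier, here is a hedged sketch (not part of the guide's example) of how a builder might programmatically schedule another project's `build` target with its `production` configuration; the project name and the extra override are placeholders:

```
import { BuilderContext, BuilderOutput } from '@angular-devkit/architect';

// Illustrative only: Architect resolves the options for "myApp:build:production"
// from angular.json, then applies the overrides object on top of them.
async function buildForProduction(context: BuilderContext): Promise<BuilderOutput> {
  const run = await context.scheduleTarget(
    { project: 'myApp', target: 'build', configuration: 'production' },
    { progress: false }, // further override, applied after the configuration
  );
  const output = await run.result; // BuilderOutput of the scheduled run
  await run.stop();
  return output;
}
```

The same `project:target:configuration` triple can also be expressed as a target string, as described next.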
#### Target strings The generic `ng run` CLI command takes as its first argument a target string of the following form. ``` project:target[:configuration] ``` | | Details | | --- | --- | | project | The name of the Angular CLI project that the target is associated with. | | target | A named builder configuration from the `architect` section of the `angular.json` file. | | configuration | (optional) The name of a specific configuration override for the given target, as defined in the `angular.json` file. | If your builder calls another builder, it might need to read a passed target string. Parse this string into an object by using the `targetFromTargetString()` utility function from `@angular-devkit/architect`. Schedule and run ---------------- Architect runs builders asynchronously. To invoke a builder, you schedule a task to be run when all configuration resolution is complete. The builder function is not executed until the scheduler returns a `BuilderRun` control object. The CLI typically schedules tasks by calling the `context.scheduleTarget()` function, and then resolves input options using the target definition in the `angular.json` file. Architect resolves input options for a given target by taking the default options object, then overwriting values from the configuration used (if any), then further overwriting values from the overrides object passed to `context.scheduleTarget()`. For the Angular CLI, the overrides object is built from command line arguments. Architect validates the resulting options values against the schema of the builder. If inputs are valid, Architect creates the context and executes the builder. For more information see [Workspace Configuration](workspace-config). > You can also invoke a builder directly from another builder or test by calling `context.scheduleBuilder()`. You pass an `options` object directly to the method, and those option values are validated against the schema of the builder without further adjustment. > > Only the `context.scheduleTarget()` method resolves the configuration and overrides through the `angular.json` file. > > ### Default architect configuration Let's create a simple `angular.json` file that puts target configurations into context. You can publish the builder to npm (see [Publishing your Library](creating-libraries#publishing-your-library)), and install it using the following command: ``` npm install @example/copy-file ``` If you create a new project with `ng new builder-test`, the generated `angular.json` file looks something like this, with only default builder configurations. ``` { // … "projects": { // … "builder-test": { // … "architect": { // … "build": { "builder": "@angular-devkit/build-angular:browser", "options": { // … more options… "outputPath": "dist/builder-test", "index": "src/index.html", "main": "src/main.ts", "polyfills": "src/polyfills.ts", "tsConfig": "src/tsconfig.app.json" }, "configurations": { "production": { // … more options… "optimization": true, "aot": true, "buildOptimizer": true } } } } } } // … } ``` ### Adding a target Add a new target that will run our builder to copy a file. This target tells the builder to copy the `package.json` file. You need to update the `angular.json` file to add a target for this builder to the "architect" section of our new project. * We'll add a new target section to the "architect" object for our project * The target named "copy-package" uses our builder, which you published to `@example/copy-file`. 
(See [Publishing your Library](creating-libraries#publishing-your-library).) * The options object provides default values for the two inputs that you defined; `source`, which is the existing file you are copying, and `destination`, the path you want to copy to * The `configurations` key is optional, we'll leave it out for now ``` { "projects": { "builder-test": { "architect": { "copy-package": { "builder": "@example/copy-file:copy", "options": { "source": "package.json", "destination": "package-copy.json" } }, "build": { "builder": "@angular-devkit/build-angular:browser", "options": { "outputPath": "dist/builder-test", "index": "src/index.html", "main": "src/main.ts", "polyfills": "src/polyfills.ts", "tsConfig": "src/tsconfig.app.json" }, "configurations": { "production": { "fileReplacements": [ { "replace": "src/environments/environment.ts", "with": "src/environments/environment.prod.ts" } ], "optimization": true, "aot": true, "buildOptimizer": true } } } } } } } ``` ### Running the builder To run our builder with the new target's default configuration, use the following CLI command. ``` ng run builder-test:copy-package ``` This copies the `package.json` file to `package-copy.json`. Use command-line arguments to override the configured defaults. For example, to run with a different `destination` value, use the following CLI command. ``` ng run builder-test:copy-package --destination=package-other.json ``` This copies the file to `package-other.json` instead of `package-copy.json`. Because you did not override the *source* option, it will copy from the `package.json` file (the default value provided for the target). Testing a builder ----------------- Use integration testing for your builder, so that you can use the Architect scheduler to create a context, as in this [example](https://github.com/mgechev/cli-builders-demo). * In the builder source directory, you have created a new test file `my-builder.spec.ts`. The code creates new instances of `JsonSchemaRegistry` (for schema validation), `TestingArchitectHost` (an in-memory implementation of `ArchitectHost`), and `Architect`. * We've added a `builders.json` file next to the builder's `package.json` file, and modified the package file to point to it. Here's an example of a test that runs the copy file builder. The test uses the builder to copy the `package.json` file and validates that the copied file's contents are the same as the source. ``` import { Architect } from '@angular-devkit/architect'; import { TestingArchitectHost } from '@angular-devkit/architect/testing'; import { schema } from '@angular-devkit/core'; import { promises as fs } from 'fs'; describe('Copy File Builder', () => { let architect: Architect; let architectHost: TestingArchitectHost; beforeEach(async () => { const registry = new schema.CoreSchemaRegistry(); registry.addPostTransform(schema.transforms.addUndefinedDefaults); // TestingArchitectHost() takes workspace and current directories. // Since we don't use those, both are the same in this case. architectHost = new TestingArchitectHost(__dirname, __dirname); architect = new Architect(architectHost, registry); // This will either take a Node package name, or a path to the directory // for the package.json file. await architectHost.addBuilderFromPackage('..'); }); it('can copy files', async () => { // A "run" can have multiple outputs, and contains progress information. 
const run = await architect.scheduleBuilder('@example/copy-file:copy', { source: 'package.json', destination: 'package-copy.json', }); // The "result" member (of type BuilderOutput) is the next output. const output = await run.result; // Stop the builder from running. This stops Architect from keeping // the builder-associated states in memory, since builders keep waiting // to be scheduled. await run.stop(); // Expect that the copied file is the same as its source. const sourceContent = await fs.readFile('package.json', 'utf8'); const destinationContent = await fs.readFile('package-copy.json', 'utf8'); expect(destinationContent).toBe(sourceContent); }); }); ``` > When running this test in your repo, you need the [`ts-node`](https://github.com/TypeStrong/ts-node) package. You can avoid this by renaming `my-builder.spec.ts` to `my-builder.spec.js`. > > ### Watch mode Architect expects builders to run once (by default) and return. This behavior is not entirely compatible with a builder that watches for changes (like Webpack, for example). Architect can support watch mode, but there are some things to look out for. * To be used with watch mode, a builder handler function should return an Observable. Architect subscribes to the Observable until it completes and might reuse it if the builder is scheduled again with the same arguments. * The builder should always emit a `BuilderOutput` object after each execution. Once it's been executed, it can enter a watch mode, to be triggered by an external event. If an event triggers it to restart, the builder should execute the `context.reportRunning()` function to tell Architect that it is running again. This prevents Architect from stopping the builder if another run is scheduled. When your builder calls `BuilderRun.stop()` to exit watch mode, Architect unsubscribes from the builder's Observable and calls the builder's teardown logic to clean up. (This behavior also allows for long-running builds to be stopped and cleaned up.) In general, if your builder is watching an external event, you should separate your run into three phases. | Phases | Details | | --- | --- | | Running | For example, webpack compiles. This ends when webpack finishes and your builder emits a `BuilderOutput` object. | | Watching | Between two runs, watch an external event stream. For example, webpack watches the file system for any changes. This ends when webpack restarts building, and `context.reportRunning()` is called. This goes back to step 1. | | Completion | Either the task is fully completed (for example, webpack was supposed to run a number of times), or the builder run was stopped (using `BuilderRun.stop()`). Your teardown logic is executed, and Architect unsubscribes from your builder's Observable. | Summary ------- The CLI Builder API provides a new way of changing the behavior of the Angular CLI by using builders to execute custom logic. * Builders can be synchronous or asynchronous, execute once or watch for external events, and can schedule other builders or targets * Builders have option defaults specified in the `angular.json` configuration file, which can be overwritten by an alternate configuration for the target, and further overwritten by command line flags * We recommend that you use integration tests to test Architect builders. Use unit tests to validate the logic that the builder executes. * If your builder returns an Observable, it should clean up in the teardown logic of that Observable Last reviewed on Mon Feb 28 2022
angular Directive composition API Directive composition API ========================= Angular directives offer a great way to encapsulate reusable behaviors— directives can apply attributes, CSS classes, and event listeners to an element. The *directive composition API* lets you apply directives to a component's host element from *within* the component TypeScript class. Adding directives to a component -------------------------------- You apply directives to a component by adding a `hostDirectives` property to a component's decorator. We call such directives *host directives*. In this example, we apply the directive `MenuBehavior` to the host element of `AdminMenu`. This works similarly to applying the `MenuBehavior` to the `<admin-menu>` element in a template. ``` @Component({ selector: 'admin-menu', template: 'admin-menu.html', hostDirectives: [MenuBehavior], }) export class AdminMenu { } ``` When the framework renders a component, Angular also creates an instance of each host directive. The directives' host bindings apply to the component's host element. By default, host directive inputs and outputs are not exposed as part of the component's public API. See [Including inputs and outputs](directive-composition-api#including-inputs-and-outputs) below for more information. **Angular applies host directives statically at compile time.** You cannot dynamically add directives at runtime. **Directives used in `hostDirectives` must be `standalone: true`.** **Angular ignores the `selector` of directives applied in the `hostDirectives` property.** Including inputs and outputs ---------------------------- When you apply `hostDirectives` to your component, the inputs and outputs from the host directives are not included in your component's API by default. You can explicitly include inputs and outputs in your component's API by expanding the entry in `hostDirectives`: ``` @Component({ selector: 'admin-menu', template: 'admin-menu.html', hostDirectives: [{ directive: MenuBehavior, inputs: ['menuId'], outputs: ['menuClosed'], }], }) export class AdminMenu { } ``` By explicitly specifying the inputs and outputs, consumers of the component with `hostDirective` can bind them in a template: ``` <admin-menu menuId="top-menu" (menuClosed)="logMenuClosed()"> ``` Furthermore, you can alias inputs and outputs from `hostDirective` to customize the API of your component: ``` @Component({ selector: 'admin-menu', template: 'admin-menu.html', hostDirectives: [{ directive: MenuBehavior, inputs: ['menuId: id'], outputs: ['menuClosed: closed'], }], }) export class AdminMenu { } ``` ``` <admin-menu id="top-menu" (closed)="logMenuClosed()"> ``` Adding directives to another directive -------------------------------------- You can also add `hostDirectives` to other directives, in addition to components. This enables the transitive aggregation of multiple behaviors. In the following example, we define two directives, `Menu` and `Tooltip`. We then compose the behavior of these two directives in `MenuWithTooltip`. Finally, we apply `MenuWithTooltip` to `SpecializedMenuWithTooltip`. When `SpecializedMenuWithTooltip` is used in a template, it creates instances of all of `Menu` , `Tooltip`, and `MenuWithTooltip`. Each of these directives' host bindings apply to the host element of `SpecializedMenuWithTooltip`. 
``` @Directive({...}) export class Menu { } @Directive({...}) export class Tooltip { } // MenuWithTooltip can compose behaviors from multiple other directives @Directive({ hostDirectives: [Tooltip, Menu], }) export class MenuWithTooltip { } // CustomWidget can apply the already-composed behaviors from MenuWithTooltip @Directive({ hostDirectives: [MenuWithTooltip], }) export class SpecializedMenuWithTooltip { } ``` Host directive semantics ------------------------ ### Directive execution order Host directives go through the same lifecycle as components and directives used directly in a template. However, host directives always execute their constructor, lifecycle hooks, and bindings *before* the component or directive on which they are applied. The following example shows minimal use of a host directive: ``` @Component({ selector: 'admin-menu', template: 'admin-menu.html', hostDirectives: [MenuBehavior], }) export class AdminMenu { } ``` The order of execution here is: 1. `MenuBehavior` instantiated 2. `AdminMenu` instantiated 3. `MenuBehavior` receives inputs (`ngOnInit`) 4. `AdminMenu` receives inputs (`ngOnInit`) 5. `MenuBehavior` applies host bindings 6. `AdminMenu` applies host bindings This order of operations means that components with `hostDirectives` can override any host bindings specified by a host directive. This order of operations extends to nested chains of host directives, as shown in the following example. ``` @Directive({...}) export class Tooltip { } @Directive({ hostDirectives: [Tooltip], }) export class CustomTooltip { } @Directive({ hostDirectives: [CustomTooltip], }) export class EvenMoreCustomTooltip { } ``` In the example above, the order of execution is: 1. `Tooltip` instantiated 2. `CustomTooltip` instantiated 3. `EvenMoreCustomTooltip` instantiated 4. `Tooltip` receives inputs (`ngOnInit`) 5. `CustomTooltip` receives inputs (`ngOnInit`) 6. `EvenMoreCustomTooltip` receives inputs (`ngOnInit`) 7. `Tooltip` applies host bindings 8. `CustomTooltip` applies host bindings 9. `EvenMoreCustomTooltip` applies host bindings ### Dependency injection A component or directive that specifies `hostDirectives` can inject the instances of those host directives and vice versa. When applying host directives to a component, both the component and host directives can define providers. If a component or directive with `hostDirectives` and those host directives both provide the same injection token, the providers defined by class with `hostDirectives` take precedence over providers defined by the host directives. ### Performance While the directive composition API offers a powerful tool for reusing common behaviors, excessive use of host directives can impact your application's memory use. If you create components or directives that use *many* host directives, you may inadvertently balloon the memory used by your application. The following example shows a component that applies several host directives. ``` @Component({ hostDirectives: [ DisabledState, RequiredState, ValidationState, ColorState, RippleBehavior, ], }) export class CustomCheckbox { } ``` This example declares a custom checkbox component that includes five host directives. This means that Angular will create six objects each time a `CustomCheckbox` renders— one for the component and one for each host directive. For a few checkboxes on a page, this won't pose any significant issues. 
However, if your page renders *hundreds* of checkboxes, such as in a table, then you could start to see an impact of the additional object allocations. Always be sure to profile your application to determine the right composition pattern for your use case. @reviewed 2022-12-11 angular Setting up the local environment and workspace Setting up the local environment and workspace ============================================== This guide explains how to set up your environment for Angular development using the [Angular CLI tool](cli "CLI command reference"). It includes information about prerequisites, installing the CLI, creating an initial workspace and starter app, and running that app locally to verify your setup. If you are new to Angular, you might want to start with [Try it now!](start), which introduces the essentials of Angular in the context of a ready-made basic online store app for you to examine and modify. This standalone tutorial takes advantage of the interactive [StackBlitz](https://stackblitz.com) environment for online development. You don't need to set up your local environment until you're ready. Prerequisites ------------- To use the Angular framework, you should be familiar with the following: * [JavaScript](https://developer.mozilla.org/docs/Web/JavaScript/A_re-introduction_to_JavaScript) * [HTML](https://developer.mozilla.org/docs/Learn/HTML/Introduction_to_HTML) * [CSS](https://developer.mozilla.org/docs/Learn/CSS/First_steps) Knowledge of [TypeScript](https://www.typescriptlang.org) is helpful, but not required. To install Angular on your local system, you need the following: | Requirements | Details | | --- | --- | | Node.js | Angular requires an [active LTS or maintenance LTS](https://nodejs.org/about/releases) version of Node.js. For information about specific version requirements, see the `engines` key in the [package.json](https://unpkg.com/browse/@angular/core/package.json) file. For more information on installing Node.js, see [nodejs.org](https://nodejs.org "Nodejs.org"). If you are unsure what version of Node.js runs on your system, run `node -v` in a terminal window. | | npm package manager | Angular, the Angular CLI, and Angular applications depend on [npm packages](https://docs.npmjs.com/getting-started/what-is-npm) for many features and functions. To download and install npm packages, you need an npm package manager. This guide uses the [npm client](https://docs.npmjs.com/cli/install) command line interface, which is installed with `Node.js` by default. To check that you have the npm client installed, run `npm -v` in a terminal window. | Install the Angular CLI ----------------------- You use the Angular CLI to create projects, generate application and library code, and perform a variety of ongoing development tasks such as testing, bundling, and deployment. To install the Angular CLI, open a terminal window and run the following command: ``` npm install -g @angular/cli ``` > On Windows client computers, the execution of PowerShell scripts is disabled by default. To allow the execution of PowerShell scripts, which is needed for npm global binaries, you must set the following [execution policy](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies): > > > ``` > Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned > ``` > Carefully read the message displayed after executing the command and follow the instructions. Make sure you understand the implications of setting an execution policy. 
> > Create a workspace and initial application ------------------------------------------ You develop apps in the context of an Angular [**workspace**](glossary#workspace). To create a new workspace and initial starter app: 1. Run the CLI command `ng new` and provide the name `my-app`, as shown here: ``` ng new my-app ``` 2. The `ng new` command prompts you for information about features to include in the initial app. Accept the defaults by pressing the Enter or Return key. The Angular CLI installs the necessary Angular npm packages and other dependencies. This can take a few minutes. The CLI creates a new workspace and a simple Welcome app, ready to run. Run the application ------------------- The Angular CLI includes a development server, so that you can build and serve your app locally. 1. Navigate to the workspace folder, such as `my-app`. 2. Run the following command: ``` cd my-app ng serve --open ``` The `ng serve` command launches the server, watches your files, and rebuilds the app as you make changes to those files. The `--open` (or just `-o`) option automatically opens your browser to `http://localhost:4200/`. If your installation and setup were successful, the browser displays the Welcome page of your new app. Next steps ---------- * For a more thorough introduction to the fundamental concepts and terminology of Angular single-page app architecture and design principles, read the [Angular Concepts](architecture) section. * Work through the [Tour of Heroes Tutorial](../tutorial/tour-of-heroes), a complete hands-on exercise that introduces you to the app development process using the Angular CLI and walks through important subsystems. * To learn more about using the Angular CLI, see the [CLI Overview](cli "CLI Overview"). In addition to creating the initial workspace and app scaffolding, use the CLI to generate Angular code such as components and services. The CLI supports the full development cycle, including building, testing, bundling, and deployment. * For more information about the Angular files generated by `ng new`, see [Workspace and Project File Structure](file-structure). Last reviewed on Mon Feb 28 2022 angular Dependency injection in Angular Dependency injection in Angular =============================== When you develop a smaller part of your system, like a module or a class, you may need to use features from other classes. For example, you may need an HTTP service to make backend calls. Dependency Injection, or DI, is a design pattern and mechanism for creating and delivering some parts of an application to other parts that require them. Angular supports this design pattern and you can use it in your applications to increase flexibility and modularity. In Angular, dependencies are typically services, but they also can be values, such as strings or functions. An injector for an application (created automatically during bootstrap) instantiates dependencies when needed, using a configured provider of the service or value. > See the live example for a working example containing the code snippets in this guide. > > Prerequisites ------------- You should be familiar with Angular apps in general, and have fundamental knowledge of Components, Directives, and NgModules. It's highly recommended that you complete the following tutorial: [Tour of Heroes application and tutorial](../tutorial/tour-of-heroes) Learn about Angular dependency injection ---------------------------------------- [Understanding dependency injection Learn basic principles of dependency injection in Angular. 
Understanding dependency injection](dependency-injection "Understanding dependency injection") [Creating and injecting service Describes how to create a service and inject it in other services and components. Creating an injectable service](creating-injectable-service "Creating and injecting service") [Configuring dependency providers Describes how to configure dependencies using the providers field on the @Component and @NgModule decorators. Also describes how to use InjectionToken to provide and inject values in DI, which can be helpful when you want to use a value other than classes as dependencies. Configuring dependency providers](dependency-injection-providers "Configuring dependency providers") [Hierarchical injectors Hierarchical DI enables you to share dependencies between different parts of the application only when and if you need to. This is an advanced topic. Hierarchical injectors](hierarchical-dependency-injection "Hierarchical injectors") Last reviewed on Tue Aug 02 2022 angular Using a pipe in a template Using a pipe in a template ========================== To apply a pipe, use the pipe operator (`|`) within a template expression as shown in the following code example, along with the *name* of the pipe, which is `[date](../api/common/datepipe)` for the built-in [`DatePipe`](../api/common/datepipe). The tabs in the example show the following: * `app.component.html` uses `[date](../api/common/datepipe)` in a separate template to display a birthday. * `hero-birthday1.component.ts` uses the same pipe as part of an in-line template in a component that also sets the birthday value. ``` <p>The hero's birthday is {{ birthday | date }}</p> ``` ``` import { Component } from '@angular/core'; @Component({ selector: 'app-hero-birthday', template: "<p>The hero's birthday is {{ birthday | date }}</p>" }) export class HeroBirthdayComponent { birthday = new Date(1988, 3, 15); // April 15, 1988 -- since month parameter is zero-based } ``` The component's `birthday` value flows through the pipe operator, `|` to the [`date`](../api/common/datepipe) function. Last reviewed on Thu Apr 07 2022 angular Using Angular routes in a single-page application Using Angular routes in a single-page application ================================================= This tutorial describes how to build a single-page application, SPA that uses multiple Angular routes. In a Single Page Application (SPA), all of your application's functions exist in a single HTML page. As users access your application's features, the browser needs to render only the parts that matter to the user, instead of loading a new page. This pattern can significantly improve your application's user experience. To define how users navigate through your application, you use routes. Add routes to define how users navigate from one part of your application to another. You can also configure routes to guard against unexpected or unauthorized behavior. To explore a sample application featuring the contents of this tutorial, see the live example. Objectives ---------- * Organize a sample application's features into modules. * Define how to navigate to a component. * Pass information to a component using a parameter. * Structure routes by nesting several routes. * Check whether users can access a route. * Control whether the application can discard unsaved changes. * Improve performance by pre-fetching route data and lazy loading feature modules. * Require specific criteria to load components. 
Prerequisites ------------- To complete this tutorial, you should have a basic understanding of the following concepts: * JavaScript * HTML * CSS * [Angular CLI](cli) You might find the [Tour of Heroes tutorial](../tutorial/tour-of-heroes) helpful, but it is not required. Create a sample application --------------------------- Using the Angular CLI, create a new application, *angular-router-sample*. This application will have two components: *crisis-list* and *heroes-list*. 1. Create a new Angular project, *angular-router-sample*. ``` ng new angular-router-sample ``` When prompted with `Would you like to add Angular routing?`, select `N`. When prompted with `Which stylesheet format would you like to use?`, select `CSS`. After a few moments, a new project, `angular-router-sample`, is ready. 2. From your terminal, navigate to the `angular-router-sample` directory. 3. Create a component, *crisis-list*. ``` ng generate component crisis-list ``` 4. In your code editor, locate the file `crisis-list.component.html` and replace the placeholder content with the following HTML. ``` <h3>CRISIS CENTER</h3> <p>Get your crisis here</p> ``` 5. Create a second component, *heroes-list*. ``` ng generate component heroes-list ``` 6. In your code editor, locate the file `heroes-list.component.html` and replace the placeholder content with the following HTML. ``` <h3>HEROES</h3> <p>Get your heroes here</p> ``` 7. In your code editor, open the file `app.component.html` and replace its contents with the following HTML. ``` <h1>Angular Router Sample</h1> <app-crisis-list></app-crisis-list> <app-heroes-list></app-heroes-list> ``` 8. Verify that your new application runs as expected by running the `ng serve` command. ``` ng serve ``` 9. Open a browser to `http://localhost:4200`. You should see a single web page, consisting of a title and the HTML of your two components. Import `[RouterModule](../api/router/routermodule)` from `@angular/router` -------------------------------------------------------------------------- Routing lets you display specific views of your application depending on the URL path. To add this functionality to your sample application, you need to update the `app.module.ts` file to use the `[RouterModule](../api/router/routermodule)` module, which you import from `@angular/router`. 1. From your code editor, open the `app.module.ts` file. 2. Add the following `import` statement. ``` import { RouterModule } from '@angular/router'; ``` Define your routes ------------------ In this section, you'll define two routes: * The route `/crisis-list` opens the `crisis-list` component. * The route `/heroes-list` opens the `heroes-list` component. A route definition is a JavaScript object. Each route typically has two properties. The first property, `path`, is a string that specifies the URL path for the route. The second property, `component`, specifies the component that your application should display for that path. 1. From your code editor, open the `app.module.ts` file. 2. Locate the `@[NgModule](../api/core/ngmodule)()` section. 3. Replace the `imports` array in that section with the following. ``` imports: [ BrowserModule, RouterModule.forRoot([ {path: 'crisis-list', component: CrisisListComponent}, {path: 'heroes-list', component: HeroesListComponent}, ]), ], ``` This code adds the `[RouterModule](../api/router/routermodule)` to the `imports` array. Next, the code uses the `forRoot()` method of the `[RouterModule](../api/router/routermodule)` to define your two routes. 
This method takes an array of JavaScript objects, with each object defining the properties of a route. The `forRoot()` method ensures that your application only instantiates one `[RouterModule](../api/router/routermodule)`. For more information, see [Singleton Services](singleton-services#forroot-and-the-router). Update your component with `[router-outlet](../api/router/routeroutlet)` ------------------------------------------------------------------------ At this point, you have defined two routes for your application. However, your application still has both the `crisis-list` and `heroes-list` components hard-coded in your `app.component.html` template. For your routes to work, you need to update your template to dynamically load a component based on the URL path. To implement this functionality, you add the `[router-outlet](../api/router/routeroutlet)` directive to your template file. 1. From your code editor, open the `app.component.html` file. 2. Delete the following lines. ``` <app-crisis-list></app-crisis-list> <app-heroes-list></app-heroes-list> ``` 3. Add the `[router-outlet](../api/router/routeroutlet)` directive. ``` <router-outlet></router-outlet> ``` View your updated application in your browser. You should see only the application title. To view the `crisis-list` component, add `crisis-list` to the end of the path in your browser's address bar. For example: ``` http://localhost:4200/crisis-list ``` Notice that the `crisis-list` component displays. Angular is using the route you defined to dynamically load the component. You can load the `heroes-list` component the same way: ``` http://localhost:4200/heroes-list ``` Control navigation with UI elements ----------------------------------- Currently, your application supports two routes. However, the only way to use those routes is for the user to manually type the path in the browser's address bar. In this section, you'll add two links that users can click to navigate between the `heroes-list` and `crisis-list` components. You'll also add some CSS styles. While these styles are not required, they make it easier to identify the link for the currently-displayed component. You'll add that functionality in the next section. 1. Open the `app.component.html` file and add the following HTML below the title. ``` <nav> <a class="button" routerLink="/crisis-list">Crisis Center</a> | <a class="button" routerLink="/heroes-list">Heroes</a> </nav> ``` This HTML uses an Angular directive, `[routerLink](../api/router/routerlink)`. This directive connects the routes you defined to your template files. 2. Open the `app.component.css` file and add the following styles. ``` .button { box-shadow: inset 0 1px 0 0 #ffffff; background: #ffffff linear-gradient(to bottom, #ffffff 5%, #f6f6f6 100%); border-radius: 6px; border: 1px solid #dcdcdc; display: inline-block; cursor: pointer; color: #666666; font-family: Arial, sans-serif; font-size: 15px; font-weight: bold; padding: 6px 24px; text-decoration: none; text-shadow: 0 1px 0 #ffffff; outline: 0; } .activebutton { box-shadow: inset 0 1px 0 0 #dcecfb; background: #bddbfa linear-gradient(to bottom, #bddbfa 5%, #80b5ea 100%); border: 1px solid #84bbf3; color: #ffffff; text-shadow: 0 1px 0 #528ecc; } ``` If you view your application in the browser, you should see these two links. When you click on a link, the corresponding component appears. 
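To see how these pieces fit together, here is a minimal sketch of how the complete `app.module.ts` might look at this stage of the tutorial. It assumes the default scaffolding produced by `ng new` and `ng generate component`, so your file may differ slightly, for example in import order or in extra CLI-generated entries.

```
// app.module.ts - a sketch of the file at this point in the tutorial.
// The component import paths assume the folders created by `ng generate component`.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouterModule } from '@angular/router';

import { AppComponent } from './app.component';
import { CrisisListComponent } from './crisis-list/crisis-list.component';
import { HeroesListComponent } from './heroes-list/heroes-list.component';

@NgModule({
  declarations: [
    AppComponent,
    CrisisListComponent,  // added automatically by `ng generate component crisis-list`
    HeroesListComponent,  // added automatically by `ng generate component heroes-list`
  ],
  imports: [
    BrowserModule,
    // The two routes defined earlier in this tutorial.
    RouterModule.forRoot([
      { path: 'crisis-list', component: CrisisListComponent },
      { path: 'heroes-list', component: HeroesListComponent },
    ]),
  ],
  bootstrap: [AppComponent],
})
export class AppModule { }
```

The `routerLink` values in the navigation template above match the `path` strings in this configuration, which is what connects the links to the routed components.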
Identify the active route ------------------------- While users can navigate your application using the links you added in the previous section, they don't have a straightforward way to identify what the active route is. Add this functionality using Angular's `[routerLinkActive](../api/router/routerlinkactive)` directive. 1. From your code editor, open the `app.component.html` file. 2. Update the anchor tags to include the `[routerLinkActive](../api/router/routerlinkactive)` directive. ``` <nav> <a class="button" routerLink="/crisis-list" routerLinkActive="activebutton" ariaCurrentWhenActive="page"> Crisis Center </a> | <a class="button" routerLink="/heroes-list" routerLinkActive="activebutton" ariaCurrentWhenActive="page"> Heroes </a> </nav> ``` View your application again. As you click one of the buttons, the style for that button updates automatically, identifying the active component to the user. By adding the `[routerLinkActive](../api/router/routerlinkactive)` directive, you inform your application to apply a specific CSS class to the active route. In this tutorial, that CSS class is `activebutton`, but you could use any class that you want. Note that we are also specifying a value for the `[routerLinkActive](../api/router/routerlinkactive)`'s `ariaCurrentWhenActive`. This makes sure that visually impaired users (which may not perceive the different styling being applied) can also identify the active button. For more information see the Accessibility Best Practices [Active links identification section](accessibility#active-links-identification). Adding a redirect ----------------- In this step of the tutorial, you add a route that redirects the user to display the `/heroes-list` component. 1. From your code editor, open the `app.module.ts` file. 2. In the `imports` array, update the `[RouterModule](../api/router/routermodule)` section as follows. ``` imports: [ BrowserModule, RouterModule.forRoot([ {path: 'crisis-list', component: CrisisListComponent}, {path: 'heroes-list', component: HeroesListComponent}, {path: '', redirectTo: '/heroes-list', pathMatch: 'full'}, ]), ], ``` Notice that this new route uses an empty string as its path. In addition, it replaces the `component` property with two new ones: | Properties | Details | | --- | --- | | `redirectTo` | This property instructs Angular to redirect from an empty path to the `heroes-list` path. | | `pathMatch` | This property instructs Angular on how much of the URL to match. For this tutorial, you should set this property to `full`. This strategy is recommended when you have an empty string for a path. For more information about this property, see the [Route API documentation](../api/router/route). | Now when you open your application, it displays the `heroes-list` component by default. Adding a 404 page ----------------- It is possible for a user to try to access a route that you have not defined. To account for this behavior, the best practice is to display a 404 page. In this section, you'll create a 404 page and update your route configuration to show that page for any unspecified routes. 1. From the terminal, create a new component, `PageNotFound`. ``` ng generate component page-not-found ``` 2. From your code editor, open the `page-not-found.component.html` file and replace its contents with the following HTML. ``` <h2>Page Not Found</h2> <p>We couldn't find that page! Not even with x-ray vision.</p> ``` 3. Open the `app.module.ts` file. 
In the `imports` array, update the `[RouterModule](../api/router/routermodule)` section as follows. ``` imports: [ BrowserModule, RouterModule.forRoot([ {path: 'crisis-list', component: CrisisListComponent}, {path: 'heroes-list', component: HeroesListComponent}, {path: '', redirectTo: '/heroes-list', pathMatch: 'full'}, {path: '**', component: PageNotFoundComponent} ]), ], ``` The new route uses a path, `**`. This path is how Angular identifies a wildcard route. Any route that does not match an existing route in your configuration will use this route. > Notice that the wildcard route is placed at the end of the array. The order of your routes is important, as Angular applies routes in order and uses the first match it finds. > > Try navigating to a non-existing route on your application, such as `http://localhost:4200/powers`. This route doesn't match anything defined in your `app.module.ts` file. However, because you defined a wildcard route, the application automatically displays your `PageNotFound` component. Next steps ---------- At this point, you have a basic application that uses Angular's routing feature to change what components the user can see based on the URL address. You have extended these features to include a redirect, as well as a wildcard route to display a custom 404 page. For more information about routing, see the following topics: * [In-app Routing and Navigation](router) * [Router API](../api/router) Last reviewed on Mon Feb 28 2022
angular Testing Pipes Testing Pipes ============= You can test <pipes> without the Angular testing utilities. > If you'd like to experiment with the application that this guide describes, run it in your browser or download and run it locally. > > Testing the `[TitleCasePipe](../api/common/titlecasepipe)` ---------------------------------------------------------- A pipe class has one method, `transform`, that manipulates the input value into a transformed output value. The `transform` implementation rarely interacts with the DOM. Most pipes have no dependence on Angular other than the `@[Pipe](../api/core/pipe)` metadata and an interface. Consider a `[TitleCasePipe](../api/common/titlecasepipe)` that capitalizes the first letter of each word. Here's an implementation with a regular expression. ``` import { Pipe, PipeTransform } from '@angular/core'; @Pipe({name: 'titlecase', pure: true}) /** Transform to Title Case: uppercase the first letter of the words in a string. */ export class TitleCasePipe implements PipeTransform { transform(input: string): string { return input.length === 0 ? '' : input.replace(/\w\S*/g, (txt => txt[0].toUpperCase() + txt.slice(1).toLowerCase() )); } } ``` Anything that uses a regular expression is worth testing thoroughly. Use simple Jasmine to explore the expected cases and the edge cases. ``` describe('TitleCasePipe', () => { // This pipe is a pure, stateless function so no need for BeforeEach const pipe = new TitleCasePipe(); it('transforms "abc" to "Abc"', () => { expect(pipe.transform('abc')).toBe('Abc'); }); it('transforms "abc def" to "Abc Def"', () => { expect(pipe.transform('abc def')).toBe('Abc Def'); }); // ... more tests ... }); ``` Writing DOM tests to support a pipe test ---------------------------------------- These are tests of the pipe *in isolation*. They can't tell if the `[TitleCasePipe](../api/common/titlecasepipe)` is working properly as applied in the application components. Consider adding component tests such as this one: ``` it('should convert hero name to Title Case', () => { // get the name's input and display elements from the DOM const hostElement: HTMLElement = fixture.nativeElement; const nameInput: HTMLInputElement = hostElement.querySelector('input')!; const nameDisplay: HTMLElement = hostElement.querySelector('span')!; // simulate user entering a new name into the input box nameInput.value = 'quick BROWN fOx'; // Dispatch a DOM event so that Angular learns of input value change. nameInput.dispatchEvent(new Event('input')); // Tell Angular to update the display binding through the title pipe fixture.detectChanges(); expect(nameDisplay.textContent).toBe('Quick Brown Fox'); }); ``` Last reviewed on Mon Feb 28 2022 angular Understanding template variables Understanding template variables ================================ Template variables help you use data from one part of a template in another part of the template. Use template variables to perform tasks such as respond to user input or finely tune your application's forms. A template variable can refer to the following: * a DOM element within a template * a directive or component * a [TemplateRef](../api/core/templateref) from an [ng-template](../api/core/ng-template) * a [web component](https://developer.mozilla.org/en-US/docs/Web/Web_Components "MDN: Web Components") > See the live example for a working example containing the code snippets in this guide. 
> > Prerequisites ------------- * [Understanding templates](template-overview) Syntax ------ In the template, you use the hash symbol, `#`, to declare a template variable. The following template variable, `#phone`, declares a `phone` variable with the `<input>` element as its value. ``` <input #phone placeholder="phone number" /> ``` Refer to a template variable anywhere in the component's template. Here, a `<button>` further down the template refers to the `phone` variable. ``` <input #phone placeholder="phone number" /> <!-- lots of other elements --> <!-- phone refers to the input element; pass its `value` to an event handler --> <button type="button" (click)="callPhone(phone.value)">Call</button> ``` How Angular assigns values to template variables ------------------------------------------------ Angular assigns a template variable a value based on where you declare the variable: * If you declare the variable on a component, the variable refers to the component instance. * If you declare the variable on a standard HTML tag, the variable refers to the element. * If you declare the variable on an `[<ng-template>](../api/core/ng-template)` element, the variable refers to a `[TemplateRef](../api/core/templateref)` instance which represents the template. For more information on `[<ng-template>](../api/core/ng-template)`, see [How Angular uses the asterisk, `*`, syntax](structural-directives#asterisk) in [Structural directives](structural-directives). Variable specifying a name -------------------------- * If the variable specifies a name on the right-hand side, such as `#var="[ngModel](../api/forms/ngmodel)"`, the variable refers to the directive or component on the element with a matching `exportAs` name. ### Using `[NgForm](../api/forms/ngform)` with template variables In most cases, Angular sets the template variable's value to the element on which it occurs. In the previous example, `phone` refers to the phone number `<input>`. The button's click handler passes the `<input>` value to the component's `callPhone()` method. The `[NgForm](../api/forms/ngform)` directive demonstrates getting a reference to a different value by referencing a directive's `exportAs` name. In the following example, the template variable, `itemForm`, appears three times separated by HTML. ``` <form #itemForm="ngForm" (ngSubmit)="onSubmit(itemForm)"> <label for="name">Name</label> <input type="text" id="name" class="form-control" name="name" ngModel required /> <button type="submit">Submit</button> </form> <div [hidden]="!itemForm.form.valid"> <p>{{ submitMessage }}</p> </div> ``` Without the `[ngForm](../api/forms/ngform)` attribute value, the reference value of `itemForm` would be the [HTMLFormElement](https://developer.mozilla.org/en-US/docs/Web/API/HTMLFormElement), `<form>`. If an element is an Angular Component, a reference with no attribute value will automatically reference the component instance. Otherwise, a reference with no value will reference the DOM element, even if the element has one or more directives applied to it. Template variable scope ----------------------- Just like variables in JavaScript or TypeScript code, template variables are scoped to the template that declares them. Similarly, [Structural directives](built-in-directives) such as `*[ngIf](../api/common/ngif)` and `*[ngFor](../api/common/ngfor)`, or `[<ng-template>](../api/core/ng-template)` declarations create a new nested template scope, much like JavaScript's control flow statements like `if` and `for` create new lexical scopes. 
You cannot access template variables within one of these structural directives from outside of its boundaries. > Define a variable only once in the template so the runtime value remains predictable. > > ### Accessing in a nested template An inner template can access template variables that the outer template defines. In the following example, changing the text in the `<input>` changes the value in the `<span>` because Angular immediately updates changes through the template variable, `ref1`. ``` <input #ref1 type="text" [(ngModel)]="firstExample" /> <span *ngIf="true">Value: {{ ref1.value }}</span> ``` In this case, the `*[ngIf](../api/common/ngif)` on `<span>` creates a new template scope, which includes the `ref1` variable from its parent scope. However, accessing a template variable from a child scope in the parent template doesn't work: ``` <input *ngIf="true" #ref2 type="text" [(ngModel)]="secondExample" /> <span>Value: {{ ref2?.value }}</span> <!-- doesn't work --> ``` Here, `ref2` is declared in the child scope created by `*[ngIf](../api/common/ngif)`, and is not accessible from the parent template. Template input variable ----------------------- A *template input variable* is a variable with a value that is set when an instance of that template is created. See: [Writing structural directives](structural-directives) Template input variables can be seen in action in the long-form usage of `[NgFor](../api/common/ngfor)`: ``` <ul> <ng-template ngFor let-hero [ngForOf]="heroes"> <li>{{hero.name}}</li> </ng-template> </ul> ``` The `[NgFor](../api/common/ngfor)` directive will instantiate this once for each hero in the `heroes` array, and will set the `hero` variable for each instance accordingly. When an `[<ng-template>](../api/core/ng-template)` is instantiated, multiple named values can be passed which can be bound to different template input variables. The right-hand side of the `let-` declaration of an input variable can specify which value should be used for that variable. `[NgFor](../api/common/ngfor)` for example also provides access to the `index` of each hero in the array: ``` <ul> <ng-template ngFor let-hero let-i="index" [ngForOf]="heroes"> <li>Hero number {{i}}: {{hero.name}}</li> </ng-template> </ul> ``` What’s next ----------- [Writing structural directives](structural-directives) Last reviewed on Thu May 12 2022 angular Tour of Heroes application and tutorial Tour of Heroes application and tutorial ======================================= In this tutorial, you build your own Angular application from the start. This is a good way to experience a typical development process as you learn Angular application-design concepts, tools, and terminology. If you're new to Angular, try the [**Try it now**](start) quick-start application first. **Try it now** is based on a ready-made, partially completed project. You can edit the application in StackBlitz and see the results in real time. **Try it now** covers the same major topics (components, template syntax, routing, services, and accessing data using HTTP) in a condensed format, following best practices. This *Tour of Heroes* tutorial provides an introduction to the fundamentals of Angular and shows you how to: * Set up your local Angular development environment. * Use the [Angular CLI](cli "CLI command reference") to develop an application. The *Tour of Heroes* application that you build helps a staffing agency manage its stable of heroes. The application has many of the features that you'd expect to find in any data-driven application. 
The finished application: * Gets a list of heroes * Displays the heroes in a list * Edits a selected hero's details * Navigates between different views of heroic data This tutorial helps you gain confidence that Angular can do whatever you need it to do by showing you how to: * Use Angular [directives](../guide/glossary#directive "Directives definition") to show and hide elements and display lists of hero data. * Create Angular [components](../guide/glossary#component "Components definition") to display hero details and show an array of heroes. * Use one-way [data binding](../guide/glossary#data-binding "Data binding definition") for read-only data. * Add editable fields to update a model with two-way data binding. * Bind component methods to user events, like keystrokes and clicks. * Enable users to select a hero from a list and edit that hero in the details view. * Format data with [pipes](../guide/glossary#pipe "Pipe definition"). * Create a shared [service](../guide/glossary#service "Service definition") to assemble the heroes. * Use [routing](../guide/glossary#router "Router definition") to navigate among different views and their components. After you complete all tutorial steps, the final application looks like this example. . Design your new application --------------------------- Here's an image of where this tutorial leads, showing the Dashboard view and the most heroic heroes: You can click the **Dashboard** and **Heroes** links in the dashboard to navigate between the views. If you click the dashboard hero "Magneta," the router opens a "Hero Details" view where you can change the hero's name. Clicking the "Back" button returns you to the Dashboard. Links at the top take you to either of the main views. If you click "Heroes," the application displays the "Heroes" list view. When you click a different hero name, the read-only mini detail beneath the list reflects the new choice. You can click the "View Details" button to drill into the editable details of the selected hero. The following diagram illustrates the navigation options. Here's the application in action: ![Tour of Heroes in Action](https://angular.io/generated/images/guide/toh/toh-anim.gif) Last reviewed on Mon May 16 2022 angular Add services Add services ============ The Tour of Heroes `HeroesComponent` is getting and displaying fake data. Refactoring the `HeroesComponent` focuses on supporting the view and making it easier to unit-test with a mock service. > For the sample application that this page describes, see the live example. > > Why services ------------ Components shouldn't fetch or save data directly and they certainly shouldn't knowingly present fake data. They should focus on presenting data and delegate data access to a service. This tutorial creates a `HeroService` that all application classes can use to get heroes. Instead of creating that service with the [`new` keyword](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators/new), use the [*dependency injection*](../../guide/dependency-injection) that Angular supports to inject it into the `HeroesComponent` constructor. Services are a great way to share information among classes that *don't know each other*. Create a `MessageService` next and inject it in these two places. 
* Inject in `HeroService`, which uses the service to send a message * Inject in `MessagesComponent`, which displays that message, and also displays the ID when the user clicks a hero Create the `HeroService` ------------------------ Run `ng generate` to create a service called `hero`. ``` ng generate service hero ``` The command generates a skeleton `HeroService` class in `src/app/hero.service.ts` as follows: ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root', }) export class HeroService { constructor() { } } ``` ### `@[Injectable](../../api/core/injectable)()` services Notice that the new service imports the Angular `[Injectable](../../api/core/injectable)` symbol and annotates the class with the `@[Injectable](../../api/core/injectable)()` decorator. This marks the class as one that participates in the *dependency injection system*. The `HeroService` class is going to provide an injectable service, and it can also have its own injected dependencies. It doesn't have any dependencies yet. The `@[Injectable](../../api/core/injectable)()` decorator accepts a metadata object for the service, the same way the `@[Component](../../api/core/component)()` decorator did for your component classes. ### Get hero data The `HeroService` could get hero data from anywhere such as a web service, local storage, or a mock data source. Removing data access from components means you can change your mind about the implementation anytime, without touching any components. They don't know how the service works. The implementation in *this* tutorial continues to deliver *mock heroes*. Import the `Hero` and `HEROES`. ``` import { Hero } from './hero'; import { HEROES } from './mock-heroes'; ``` Add a `getHeroes` method to return the *mock heroes*. ``` getHeroes(): Hero[] { return HEROES; } ``` Provide the `HeroService` ------------------------- You must make the `HeroService` available to the dependency injection system before Angular can *inject* it into the `HeroesComponent` by registering a *provider*. A provider is something that can create or deliver a service. In this case, it instantiates the `HeroService` class to provide the service. To make sure that the `HeroService` can provide this service, register it with the *injector*. The *injector* is the object that chooses and injects the provider where the application requires it. By default, `ng generate service` registers a provider with the *root injector* for your service by including provider metadata, that's `providedIn: 'root'` in the `@[Injectable](../../api/core/injectable)()` decorator. ``` @Injectable({ providedIn: 'root', }) ``` When you provide the service at the root level, Angular creates a single, shared instance of `HeroService` and injects into any class that asks for it. Registering the provider in the `@[Injectable](../../api/core/injectable)` metadata also allows Angular to optimize an application by removing the service if it isn't used. > To learn more about providers, see the [Providers section](../../guide/providers). To learn more about injectors, see the [Dependency Injection guide](../../guide/dependency-injection). > > The `HeroService` is now ready to plug into the `HeroesComponent`. > This is an interim code sample that allows you to provide and use the `HeroService`. At this point, the code differs from the `HeroService` in the [final code review](toh-pt4#final-code-review). > > Update `HeroesComponent` ------------------------ Open the `HeroesComponent` class file. 
Delete the `HEROES` import, because you won't need that anymore. Import the `HeroService` instead. ``` import { HeroService } from '../hero.service'; ``` Replace the definition of the `heroes` property with a declaration. ``` heroes: Hero[] = []; ``` ### Inject the `HeroService` Add a private `heroService` parameter of type `HeroService` to the constructor. ``` constructor(private heroService: HeroService) {} ``` The parameter simultaneously defines a private `heroService` property and identifies it as a `HeroService` injection site. When Angular creates a `HeroesComponent`, the [Dependency Injection](../../guide/dependency-injection) system sets the `heroService` parameter to the singleton instance of `HeroService`. ### Add `getHeroes()` Create a method to retrieve the heroes from the service. ``` getHeroes(): void { this.heroes = this.heroService.getHeroes(); } ``` ### Call it in `ngOnInit()` While you could call `getHeroes()` in the constructor, that's not the best practice. Reserve the constructor for minimal initialization such as wiring constructor parameters to properties. The constructor shouldn't *do anything*. It certainly shouldn't call a function that makes HTTP requests to a remote server as a *real* data service would. Instead, call `getHeroes()` inside the [*ngOnInit lifecycle hook*](../../guide/lifecycle-hooks) and let Angular call `ngOnInit()` at an appropriate time *after* constructing a `HeroesComponent` instance. ``` ngOnInit(): void { this.getHeroes(); } ``` ### See it run After the browser refreshes, the application should run as before, showing a list of heroes and a hero detail view when you click a hero name. Observable data --------------- The `HeroService.getHeroes()` method has a *synchronous signature*, which implies that the `HeroService` can fetch heroes synchronously. The `HeroesComponent` consumes the `getHeroes()` result as if heroes could be fetched synchronously. ``` this.heroes = this.heroService.getHeroes(); ``` This approach won't work in a real application that uses asynchronous calls. It works now because your service synchronously returns *mock heroes*. If `getHeroes()` can't return immediately with hero data, it shouldn't be synchronous, because that would block the browser as it waits to return data. `HeroService.getHeroes()` must have an *asynchronous signature* of some kind. In this tutorial, `HeroService.getHeroes()` returns an `Observable` so that it can use the Angular `HttpClient.get` method to fetch the heroes and have [`HttpClient.get()`](../../guide/http) return an `Observable`. ### Observable `HeroService` `Observable` is one of the key classes in the [RxJS library](https://rxjs.dev). In [the tutorial on HTTP](toh-pt6), you can see how Angular's `[HttpClient](../../api/common/http/httpclient)` methods return RxJS `Observable` objects. This tutorial simulates getting data from the server with the RxJS `of()` function. Open the `HeroService` file and import the `Observable` and `of` symbols from RxJS. ``` import { Observable, of } from 'rxjs'; ``` Replace the `getHeroes()` method with the following: ``` getHeroes(): Observable<Hero[]> { const heroes = of(HEROES); return heroes; } ``` `of(HEROES)` returns an `Observable<Hero[]>` that emits *a single value*, the array of mock heroes. > The [HTTP tutorial](toh-pt6) shows you how to call `HttpClient.get<Hero[]>()`, which also returns an `Observable<Hero[]>` that emits *a single value*, an array of heroes from the body of the HTTP response. 
> > ### Subscribe in `HeroesComponent` The `HeroService.getHeroes` method used to return a `Hero[]`. Now it returns an `Observable<Hero[]>`. You need to adjust your application to work with that change to `HeroesComponent`. Find the `getHeroes` method and replace it with the following code. the new code is shown side-by-side with the current version for comparison. ``` getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes); } ``` ``` getHeroes(): void { this.heroes = this.heroService.getHeroes(); } ``` `Observable.subscribe()` is the critical difference. The previous version assigns an array of heroes to the component's `heroes` property. The assignment occurs *synchronously*, as if the server could return heroes instantly or the browser could freeze the UI while it waited for the server's response. That *won't work* when the `HeroService` is actually making requests of a remote server. The new version waits for the `Observable` to emit the array of heroes, which could happen now or several minutes from now. The `subscribe()` method passes the emitted array to the callback, which sets the component's `heroes` property. This asynchronous approach *works* when the `HeroService` requests heroes from the server. Show messages ------------- This section guides you through the following: * Adding a `MessagesComponent` that displays application messages at the bottom of the screen * Creating an injectable, application-wide `MessageService` for sending messages to be displayed * Injecting `MessageService` into the `HeroService` * Displaying a message when `HeroService` fetches heroes successfully ### Create `MessagesComponent` Use `ng generate` to create the `MessagesComponent`. ``` ng generate component messages ``` `ng generate` creates the component files in the `src/app/messages` directory and declares the `MessagesComponent` in `AppModule`. Edit the `AppComponent` template to display the `MessagesComponent`. ``` <h1>{{title}}</h1> <app-heroes></app-heroes> <app-messages></app-messages> ``` You should see the default paragraph from `MessagesComponent` at the bottom of the page. ### Create the `MessageService` Use `ng generate` to create the `MessageService` in `src/app`. ``` ng generate service message ``` Open `MessageService` and replace its contents with the following. ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root', }) export class MessageService { messages: string[] = []; add(message: string) { this.messages.push(message); } clear() { this.messages = []; } } ``` The service exposes its cache of `messages` and two methods: * One to `add()` a message to the cache. * Another to `clear()` the cache. ### Inject it into the `HeroService` In `HeroService`, import the `MessageService`. ``` import { MessageService } from './message.service'; ``` Edit the constructor with a parameter that declares a private `messageService` property. Angular injects the singleton `MessageService` into that property when it creates the `HeroService`. ``` constructor(private messageService: MessageService) { } ``` > This is an example of a typical *service-in-service* scenario in which you inject the `MessageService` into the `HeroService` which is injected into the `HeroesComponent`. > > ### Send a message from `HeroService` Edit the `getHeroes()` method to send a message when the heroes are fetched. 
``` getHeroes(): Observable<Hero[]> { const heroes = of(HEROES); this.messageService.add('HeroService: fetched heroes'); return heroes; } ``` ### Display the message from `HeroService` The `MessagesComponent` should display all messages, including the message sent by the `HeroService` when it fetches heroes. Open `MessagesComponent` and import the `MessageService`. ``` import { MessageService } from '../message.service'; ``` Edit the constructor with a parameter that declares a **public** `messageService` property. Angular injects the singleton `MessageService` into that property when it creates the `MessagesComponent`. ``` constructor(public messageService: MessageService) {} ``` The `messageService` property **must be public** because you're going to bind to it in the template. > Angular only binds to *public* component properties. > > ### Bind to the `MessageService` Replace the `MessagesComponent` template created by `ng generate` with the following. ``` <div *ngIf="messageService.messages.length"> <h2>Messages</h2> <button type="button" class="clear" (click)="messageService.clear()">Clear messages</button> <div *ngFor='let message of messageService.messages'> {{message}} </div> </div> ``` This template binds directly to the component's `messageService`. | | Details | | --- | --- | | `*[ngIf](../../api/common/ngif)` | Only displays the messages area if there are messages to show. | | `*[ngFor](../../api/common/ngfor)` | Presents the list of messages in repeated `<div>` elements. | | Angular [event binding](../../guide/event-binding) | Binds the button's click event to `MessageService.clear()`. | The messages look better after you add the private CSS styles to `messages.component.css` as listed in one of the ["final code review"](toh-pt4#final-code-review) tabs below. Add messages to hero service ---------------------------- The following example shows how to display a history of each time the user clicks on a hero. This helps when you get to the next section on [Routing](toh-pt5). ``` import { Component, OnInit } from '@angular/core'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; import { MessageService } from '../message.service'; @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent implements OnInit { selectedHero?: Hero; heroes: Hero[] = []; constructor(private heroService: HeroService, private messageService: MessageService) { } ngOnInit(): void { this.getHeroes(); } onSelect(hero: Hero): void { this.selectedHero = hero; this.messageService.add(`HeroesComponent: Selected hero id=${hero.id}`); } getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes); } } ``` Refresh the browser to see the list of heroes, and scroll to the bottom to see the messages from the HeroService. Each time you click a hero, a new message appears to record the selection. Use the **Clear messages** button to clear the message history. Final code review ----------------- Here are the code files discussed on this page. 
``` import { Injectable } from '@angular/core'; import { Observable, of } from 'rxjs'; import { Hero } from './hero'; import { HEROES } from './mock-heroes'; import { MessageService } from './message.service'; @Injectable({ providedIn: 'root', }) export class HeroService { constructor(private messageService: MessageService) { } getHeroes(): Observable<Hero[]> { const heroes = of(HEROES); this.messageService.add('HeroService: fetched heroes'); return heroes; } } ``` ``` import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root', }) export class MessageService { messages: string[] = []; add(message: string) { this.messages.push(message); } clear() { this.messages = []; } } ``` ``` import { Component, OnInit } from '@angular/core'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; import { MessageService } from '../message.service'; @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent implements OnInit { selectedHero?: Hero; heroes: Hero[] = []; constructor(private heroService: HeroService, private messageService: MessageService) { } ngOnInit(): void { this.getHeroes(); } onSelect(hero: Hero): void { this.selectedHero = hero; this.messageService.add(`HeroesComponent: Selected hero id=${hero.id}`); } getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes); } } ``` ``` import { Component } from '@angular/core'; import { MessageService } from '../message.service'; @Component({ selector: 'app-messages', templateUrl: './messages.component.html', styleUrls: ['./messages.component.css'] }) export class MessagesComponent { constructor(public messageService: MessageService) {} } ``` ``` <div *ngIf="messageService.messages.length"> <h2>Messages</h2> <button type="button" class="clear" (click)="messageService.clear()">Clear messages</button> <div *ngFor='let message of messageService.messages'> {{message}} </div> </div> ``` ``` /* MessagesComponent's private CSS styles */ h2 { color: #A80000; font-family: Arial, Helvetica, sans-serif; font-weight: lighter; } .clear { color: #333; background-color: #eee; margin-bottom: 12px; padding: 1rem; border-radius: 4px; font-size: 1rem; } .clear:hover { color: white; background-color: #42545C; } ``` ``` import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { HeroesComponent } from './heroes/heroes.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; import { MessagesComponent } from './messages/messages.component'; @NgModule({ declarations: [ AppComponent, HeroesComponent, HeroDetailComponent, MessagesComponent ], imports: [ BrowserModule, FormsModule ], providers: [ // no need to place any providers due to the `providedIn` flag... ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` <h1>{{title}}</h1> <app-heroes></app-heroes> <app-messages></app-messages> ``` Summary ------- * You refactored data access to the `HeroService` class. * You registered the `HeroService` as the *provider* of its service at the root level so that it can be injected anywhere in the application. * You used [Angular Dependency Injection](../../guide/dependency-injection) to inject it into a component. * You gave the `HeroService` `get data` method an asynchronous signature. 
* You discovered `Observable` and the RxJS library. * You used RxJS `of()` to return `Observable<Hero[]>`, an observable of mock heroes. * The component's `ngOnInit` lifecycle hook calls the `HeroService` method, not the constructor. * You created a `MessageService` for loosely coupled communication between classes. * The `HeroService` injected into a component is created with another injected service, `MessageService`. Last reviewed on Mon Feb 28 2022
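One payoff of this refactoring is that the data access can now be exercised in isolation. The following is a minimal sketch, assuming Jasmine (the test framework used elsewhere in these docs) and the `HeroService` and `MessageService` shown above; the spec file name and the spy-based stand-in for `MessageService` are illustrative and not part of the tutorial.

```
// hero.service.spec.ts - a sketch of a plain Jasmine test for HeroService.
// It constructs the service directly instead of using the Angular TestBed.
import { HeroService } from './hero.service';
import { MessageService } from './message.service';

describe('HeroService', () => {
  it('emits the mock heroes and logs a message', (done: DoneFn) => {
    // Replace the real MessageService with a spy so the test can assert on it.
    const messageSpy = jasmine.createSpyObj<MessageService>('MessageService', ['add']);
    const service = new HeroService(messageSpy);

    service.getHeroes().subscribe(heroes => {
      expect(heroes.length).toBeGreaterThan(0);
      expect(messageSpy.add).toHaveBeenCalledWith('HeroService: fetched heroes');
      done();
    });
  });
});
```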
angular Add navigation with routing Add navigation with routing =========================== The Tour of Heroes application has new requirements: * Add a *Dashboard* view * Add the ability to navigate between the *Heroes* and *Dashboard* views * When users click a hero name in either view, navigate to a detail view of the selected hero * When users click a *deep link* in an email, open the detail view for a particular hero > For the sample application that this page describes, see the live example. > > When you're done, users can navigate the application like this: Add the `AppRoutingModule` -------------------------- In Angular, the best practice is to load and configure the router in a separate, top-level module. The router is dedicated to routing and imported by the root `AppModule`. By convention, the module class name is `AppRoutingModule` and it belongs in the `app-routing.module.ts` in the `src/app` directory. Run `ng generate` to create the application routing module. ``` ng generate module app-routing --flat --module=app ``` > > > | Parameter | Details | > | --- | --- | > | `--flat` | Puts the file in `src/app` instead of its own directory. | > | `--module=app` | Tells `ng generate` to register it in the `imports` array of the `AppModule`. | > > The file that `ng generate` creates looks like this: ``` import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; @NgModule({ imports: [ CommonModule ], declarations: [] }) export class AppRoutingModule { } ``` Replace it with the following: ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { HeroesComponent } from './heroes/heroes.component'; const routes: Routes = [ { path: 'heroes', component: HeroesComponent } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { } ``` First, the `app-routing.module.ts` file imports `[RouterModule](../../api/router/routermodule)` and `[Routes](../../api/router/routes)` so the application can have routing capability. The next import, `HeroesComponent`, gives the Router somewhere to go once you configure the routes. Notice that the `[CommonModule](../../api/common/commonmodule)` references and `declarations` array are unnecessary, so are no longer part of `AppRoutingModule`. The following sections explain the rest of the `AppRoutingModule` in more detail. ### Routes The next part of the file is where you configure your routes. *Routes* tell the Router which view to display when a user clicks a link or pastes a URL into the browser address bar. Since `app-routing.module.ts` already imports `HeroesComponent`, you can use it in the `routes` array: ``` const routes: Routes = [ { path: 'heroes', component: HeroesComponent } ]; ``` A typical Angular `[Route](../../api/router/route)` has two properties: | Properties | Details | | --- | --- | | `path` | A string that matches the URL in the browser address bar. | | `component` | The component that the router should create when navigating to this route. | This tells the router to match that URL to `path: 'heroes'` and display the `HeroesComponent` when the URL is something like `localhost:4200/heroes`. ### `[RouterModule.forRoot()](../../api/router/routermodule#forRoot)` The `@[NgModule](../../api/core/ngmodule)` metadata initializes the router and starts it listening for browser location changes. 
The following line adds the `[RouterModule](../../api/router/routermodule)` to the `AppRoutingModule` `imports` array and configures it with the `routes` in one step by calling `[RouterModule.forRoot()](../../api/router/routermodule#forRoot)`: ``` imports: [ RouterModule.forRoot(routes) ], ``` > The method is called `forRoot()` because you configure the router at the application's root level. The `forRoot()` method supplies the service providers and directives needed for routing, and performs the initial navigation based on the current browser URL. > > Next, `AppRoutingModule` exports `[RouterModule](../../api/router/routermodule)` to be available throughout the application. ``` exports: [ RouterModule ] ``` Add `[RouterOutlet](../../api/router/routeroutlet)` --------------------------------------------------- Open the `AppComponent` template and replace the `<app-heroes>` element with a `<[router-outlet](../../api/router/routeroutlet)>` element. ``` <h1>{{title}}</h1> <router-outlet></router-outlet> <app-messages></app-messages> ``` The `AppComponent` template no longer needs `<app-heroes>` because the application only displays the `HeroesComponent` when the user navigates to it. The `<[router-outlet](../../api/router/routeroutlet)>` tells the router where to display routed views. > The `[RouterOutlet](../../api/router/routeroutlet)` is one of the router directives that became available to the `AppComponent` because `AppModule` imports `AppRoutingModule` which exported `[RouterModule](../../api/router/routermodule)`. The `ng generate` command you ran at the start of this tutorial added this import because of the `--module=app` flag. If you didn't use the `ng generate` command to create `app-routing.module.ts`, import `AppRoutingModule` into `app.module.ts` and add it to the `imports` array of the `[NgModule](../../api/core/ngmodule)`. > > #### Try it If you're not still serving your application, run `ng serve` to see your application in the browser. The browser should refresh and display the application title but not the list of heroes. Look at the browser's address bar. The URL ends in `/`. The route path to `HeroesComponent` is `/heroes`. Append `/heroes` to the URL in the browser address bar. You should see the familiar heroes overview/detail view. Remove `/heroes` from the URL in the browser address bar. The browser should refresh and display the application title but not the list of heroes. Add a navigation link using `[routerLink](../../api/router/routerlink)` ----------------------------------------------------------------------- Ideally, users should be able to click a link to navigate rather than pasting a route URL into the address bar. Add a `<nav>` element and, within that, an anchor element that, when clicked, triggers navigation to the `HeroesComponent`. The revised `AppComponent` template looks like this: ``` <h1>{{title}}</h1> <nav> <a routerLink="/heroes">Heroes</a> </nav> <router-outlet></router-outlet> <app-messages></app-messages> ``` A [`routerLink` attribute](toh-pt5#routerlink) is set to `"/heroes"`, the string that the router matches to the route to `HeroesComponent`. The `[routerLink](../../api/router/routerlink)` is the selector for the [`RouterLink` directive](../../api/router/routerlink) that turns user clicks into router navigations. It's another of the public directives in the `[RouterModule](../../api/router/routermodule)`. The browser refreshes and displays the application title and heroes link, but not the heroes list. Click the link. 
The address bar updates to `/heroes` and the list of heroes appears. > Make this and future navigation links look better by adding private CSS styles to `app.component.css` as listed in the [final code review](toh-pt5#appcomponent) below. > > Add a dashboard view -------------------- Routing makes more sense when your application has more than one view, yet the *Tour of Heroes* application has only the heroes view. To add a `DashboardComponent`, run `ng generate` as shown here: ``` ng generate component dashboard ``` `ng generate` creates the files for the `DashboardComponent` and declares it in `AppModule`. Replace the default content in these files as shown here: ``` <h2>Top Heroes</h2> <div class="heroes-menu"> <a *ngFor="let hero of heroes"> {{hero.name}} </a> </div> ``` ``` import { Component, OnInit } from '@angular/core'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-dashboard', templateUrl: './dashboard.component.html', styleUrls: [ './dashboard.component.css' ] }) export class DashboardComponent implements OnInit { heroes: Hero[] = []; constructor(private heroService: HeroService) { } ngOnInit(): void { this.getHeroes(); } getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes.slice(1, 5)); } } ``` ``` /* DashboardComponent's private CSS styles */ h2 { text-align: center; } .heroes-menu { padding: 0; margin: auto; max-width: 1000px; /* flexbox */ display: flex; flex-direction: row; flex-wrap: wrap; justify-content: space-around; align-content: flex-start; align-items: flex-start; } a { background-color: #3f525c; border-radius: 2px; padding: 1rem; font-size: 1.2rem; text-decoration: none; display: inline-block; color: #fff; text-align: center; width: 100%; min-width: 70px; margin: .5rem auto; box-sizing: border-box; /* flexbox */ order: 0; flex: 0 1 auto; align-self: auto; } @media (min-width: 600px) { a { width: 18%; box-sizing: content-box; } } a:hover { background-color: #000; } ``` The *template* presents a grid of hero name links. * The `*[ngFor](../../api/common/ngfor)` repeater creates as many links as are in the component's `heroes` array. * The links are styled as colored blocks by the `dashboard.component.css`. * The links don't go anywhere yet. The *class* is like the `HeroesComponent` class. * It defines a `heroes` array property * The constructor expects Angular to inject the `HeroService` into a private `heroService` property * The `ngOnInit()` lifecycle hook calls `getHeroes()` This `getHeroes()` returns the sliced list of heroes at positions 1 and 5, returning only Heroes two, three, four, and five. ``` getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes.slice(1, 5)); } ``` ### Add the dashboard route To navigate to the dashboard, the router needs an appropriate route. Import the `DashboardComponent` in the `app-routing-module.ts` file. ``` import { DashboardComponent } from './dashboard/dashboard.component'; ``` Add a route to the `routes` array that matches a path to the `DashboardComponent`. ``` { path: 'dashboard', component: DashboardComponent }, ``` ### Add a default route When the application starts, the browser's address bar points to the web site's root. That doesn't match any existing route so the router doesn't navigate anywhere. The space below the `<[router-outlet](../../api/router/routeroutlet)>` is blank. To make the application navigate to the dashboard automatically, add the following route to the `routes` array. 
``` { path: '', redirectTo: '/dashboard', pathMatch: 'full' }, ``` This route redirects a URL that fully matches the empty path to the route whose path is `'/dashboard'`. After the browser refreshes, the router loads the `DashboardComponent` and the browser address bar shows the `/dashboard` URL. ### Add dashboard link to the shell The user should be able to navigate between the `DashboardComponent` and the `HeroesComponent` by clicking links in the navigation area near the top of the page. Add a dashboard navigation link to the `AppComponent` shell template, just above the *Heroes* link. ``` <h1>{{title}}</h1> <nav> <a routerLink="/dashboard">Dashboard</a> <a routerLink="/heroes">Heroes</a> </nav> <router-outlet></router-outlet> <app-messages></app-messages> ``` After the browser refreshes you can navigate freely between the two views by clicking the links. Navigating to hero details -------------------------- The `HeroDetailComponent` displays details of a selected hero. At the moment the `HeroDetailComponent` is only visible at the bottom of the `HeroesComponent` The user should be able to get to these details in three ways. 1. By clicking a hero in the dashboard. 2. By clicking a hero in the heroes list. 3. By pasting a "deep link" URL into the browser address bar that identifies the hero to display. This section enables navigation to the `HeroDetailComponent` and liberates it from the `HeroesComponent`. ### Delete `hero details` from `HeroesComponent` When the user clicks a hero in `HeroesComponent`, the application should navigate to the `HeroDetailComponent`, replacing the heroes list view with the hero detail view. The heroes list view should no longer show hero details as it does now. Open the `heroes/heroes.component.html` and delete the `<app-hero-detail>` element from the bottom. Clicking a hero item now does nothing. You can fix that after you enable routing to the `HeroDetailComponent`. ### Add a `hero detail` route A URL like `~/detail/11` would be a good URL for navigating to the *Hero Detail* view of the hero whose `id` is `11`. Open `app-routing.module.ts` and import `HeroDetailComponent`. ``` import { HeroDetailComponent } from './hero-detail/hero-detail.component'; ``` Then add a *parameterized* route to the `routes` array that matches the path pattern to the *hero detail* view. ``` { path: 'detail/:id', component: HeroDetailComponent }, ``` The colon `:` character in the `path` indicates that `:id` is a placeholder for a specific hero `id`. At this point, all application routes are in place. ``` const routes: Routes = [ { path: '', redirectTo: '/dashboard', pathMatch: 'full' }, { path: 'dashboard', component: DashboardComponent }, { path: 'detail/:id', component: HeroDetailComponent }, { path: 'heroes', component: HeroesComponent } ]; ``` ### `DashboardComponent` hero links The `DashboardComponent` hero links do nothing at the moment. Now that the router has a route to `HeroDetailComponent`, fix the dashboard hero links to navigate using the *parameterized* dashboard route. ``` <a *ngFor="let hero of heroes" routerLink="/detail/{{hero.id}}"> {{hero.name}} </a> ``` You're using Angular [interpolation binding](../../guide/interpolation) within the `*[ngFor](../../api/common/ngfor)` repeater to insert the current iteration's `hero.id` into each [`routerLink`](toh-pt5#routerlink). ### `HeroesComponent` hero links The hero items in the `HeroesComponent` are `<li>` elements whose click events are bound to the component's `onSelect()` method. 
``` <ul class="heroes"> <li *ngFor="let hero of heroes"> <button type="button" (click)="onSelect(hero)" [class.selected]="hero === selectedHero"> <span class="badge">{{hero.id}}</span> <span class="name">{{hero.name}}</span> </button> </li> </ul> ``` Remove the inner HTML of `<li>`. Wrap the badge and name in an anchor `<a>` element. Add a `[routerLink](../../api/router/routerlink)` attribute to the anchor that's the same as in the dashboard template. ``` <ul class="heroes"> <li *ngFor="let hero of heroes"> <a routerLink="/detail/{{hero.id}}"> <span class="badge">{{hero.id}}</span> {{hero.name}} </a> </li> </ul> ``` Be sure to fix the private style sheet in `heroes.component.css` to make the list look as it did before. Revised styles are in the [final code review](toh-pt5#heroescomponent) at the bottom of this guide. #### Remove dead code - optional While the `HeroesComponent` class still works, the `onSelect()` method and `selectedHero` property are no longer used. It's nice to tidy things up for your future self. Here's the class after pruning away the dead code. ``` export class HeroesComponent implements OnInit { heroes: Hero[] = []; constructor(private heroService: HeroService) { } ngOnInit(): void { this.getHeroes(); } getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes); } } ``` Routable `HeroDetailComponent` ------------------------------ The parent `HeroesComponent` used to set the `HeroDetailComponent.hero` property and the `HeroDetailComponent` displayed the hero. `HeroesComponent` doesn't do that anymore. Now the router creates the `HeroDetailComponent` in response to a URL such as `~/detail/12`. The `HeroDetailComponent` needs a new way to get the hero to display. This section explains the following: * Get the route that created it * Extract the `id` from the route * Get the hero with that `id` from the server using the `HeroService` Add the following imports: ``` import { ActivatedRoute } from '@angular/router'; import { Location } from '@angular/common'; import { HeroService } from '../hero.service'; ``` Inject the `[ActivatedRoute](../../api/router/activatedroute)`, `HeroService`, and `[Location](../../api/common/location)` services into the constructor, saving their values in private fields: ``` constructor( private route: ActivatedRoute, private heroService: HeroService, private location: Location ) {} ``` The [`ActivatedRoute`](../../api/router/activatedroute) holds information about the route to this instance of the `HeroDetailComponent`. This component is interested in the route's parameters extracted from the URL. The "id" parameter is the `id` of the hero to display. The [`HeroService`](toh-pt4) gets hero data from the remote server and this component uses it to get the hero-to-display. The [`location`](../../api/common/location) is an Angular service for interacting with the browser. This service lets you navigate back to the previous view. ### Extract the `id` route parameter In the `ngOnInit()` [lifecycle hook](../../guide/lifecycle-hooks#oninit) call `getHero()` and define it as follows. ``` ngOnInit(): void { this.getHero(); } getHero(): void { const id = Number(this.route.snapshot.paramMap.get('id')); this.heroService.getHero(id) .subscribe(hero => this.hero = hero); } ``` The `route.snapshot` is a static image of the route information shortly after the component was created. The `paramMap` is a dictionary of route parameter values extracted from the URL. The `"id"` key returns the `id` of the hero to fetch. 
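Because the snapshot is read only once, a component instance that the router re-uses for a different URL would not pick up a new `id`. That situation doesn't arise in this tutorial, but for illustration only, here is a minimal sketch of the observable `route.paramMap` alternative. It assumes the `HeroService.getHero()` method added in the next step; the component name, selector, and inline template are made up for the example.

```
import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { Observable } from 'rxjs';
import { switchMap } from 'rxjs/operators';

import { Hero } from '../hero';
import { HeroService } from '../hero.service';

@Component({
  selector: 'app-hero-detail-stream',
  // Hypothetical inline template; the tutorial component uses templateUrl.
  template: `<h2 *ngIf="hero$ | async as hero">{{hero.name | uppercase}} Details</h2>`,
})
export class HeroDetailStreamComponent implements OnInit {
  hero$!: Observable<Hero>;

  constructor(
    private route: ActivatedRoute,
    private heroService: HeroService) {}

  ngOnInit(): void {
    // Unlike route.snapshot, paramMap emits again whenever the "id"
    // parameter changes while this component instance stays alive.
    this.hero$ = this.route.paramMap.pipe(
      switchMap(params => this.heroService.getHero(Number(params.get('id'))))
    );
  }
}
```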
Route parameters are always strings. The JavaScript `Number` function converts the string to a number, which is what a hero `id` should be. The browser refreshes and the application crashes with a compiler error. `HeroService` doesn't have a `getHero()` method. Add it now. ### Add `HeroService.getHero()` Open `HeroService` and add the following `getHero()` method with the `id` after the `getHeroes()` method: ``` getHero(id: number): Observable<Hero> { // For now, assume that a hero with the specified `id` always exists. // Error handling will be added in the next step of the tutorial. const hero = HEROES.find(h => h.id === id)!; this.messageService.add(`HeroService: fetched hero id=${id}`); return of(hero); } ``` > **IMPORTANT**: The backtick ( ``` ) characters define a JavaScript [template literal](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Template_literals) for embedding the `id`. > > Like [`getHeroes()`](toh-pt4#observable-heroservice), `getHero()` has an asynchronous signature. It returns a *mock hero* as an `Observable`, using the RxJS `of()` function. You can rewrite `getHero()` as a real `Http` request without having to change the `HeroDetailComponent` that calls it. #### Try it The browser refreshes and the application is working again. You can click a hero in the dashboard or in the heroes list and navigate to that hero's detail view. If you paste `localhost:4200/detail/12` in the browser address bar, the router navigates to the detail view for the hero with `id: 12`, **Dr Nice**. ### Find the way back By clicking the browser's back button, you can go back to the previous page. This could be the hero list or dashboard view, depending upon which sent you to the detail view. It would be nice to have a button on the `HeroDetail` view that can do that. Add a *go back* button to the bottom of the component template and bind it to the component's `goBack()` method. ``` <button type="button" (click)="goBack()">go back</button> ``` Add a `goBack()` *method* to the component class that navigates backward one step in the browser's history stack using the `[Location](../../api/common/location)` service that you [used to inject](toh-pt5#hero-detail-ctor). ``` goBack(): void { this.location.back(); } ``` Refresh the browser and start clicking. Users can now navigate around the application using the new buttons. The details look better when you add the private CSS styles to `hero-detail.component.css` as listed in one of the ["final code review"](toh-pt5#final-code-review) tabs below. Final code review ----------------- Here are the code files discussed on this page. 
#### `AppRoutingModule`, `AppModule`, and `HeroService` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { DashboardComponent } from './dashboard/dashboard.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; import { HeroesComponent } from './heroes/heroes.component'; import { MessagesComponent } from './messages/messages.component'; import { AppRoutingModule } from './app-routing.module'; @NgModule({ imports: [ BrowserModule, FormsModule, AppRoutingModule ], declarations: [ AppComponent, DashboardComponent, HeroesComponent, HeroDetailComponent, MessagesComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ``` import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { DashboardComponent } from './dashboard/dashboard.component'; import { HeroesComponent } from './heroes/heroes.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; const routes: Routes = [ { path: '', redirectTo: '/dashboard', pathMatch: 'full' }, { path: 'dashboard', component: DashboardComponent }, { path: 'detail/:id', component: HeroDetailComponent }, { path: 'heroes', component: HeroesComponent } ]; @NgModule({ imports: [ RouterModule.forRoot(routes) ], exports: [ RouterModule ] }) export class AppRoutingModule {} ``` ``` import { Injectable } from '@angular/core'; import { Observable, of } from 'rxjs'; import { Hero } from './hero'; import { HEROES } from './mock-heroes'; import { MessageService } from './message.service'; @Injectable({ providedIn: 'root' }) export class HeroService { constructor(private messageService: MessageService) { } getHeroes(): Observable<Hero[]> { const heroes = of(HEROES); this.messageService.add('HeroService: fetched heroes'); return heroes; } getHero(id: number): Observable<Hero> { // For now, assume that a hero with the specified `id` always exists. // Error handling will be added in the next step of the tutorial. 
const hero = HEROES.find(h => h.id === id)!; this.messageService.add(`HeroService: fetched hero id=${id}`); return of(hero); } } ``` #### `AppComponent` ``` <h1>{{title}}</h1> <nav> <a routerLink="/dashboard">Dashboard</a> <a routerLink="/heroes">Heroes</a> </nav> <router-outlet></router-outlet> <app-messages></app-messages> ``` ``` import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { title = 'Tour of Heroes'; } ``` ``` /* AppComponent's private CSS styles */ h1 { margin-bottom: 0; } nav a { padding: 1rem; text-decoration: none; margin-top: 10px; display: inline-block; background-color: #e8e8e8; color: #3d3d3d; border-radius: 4px; } nav a:hover { color: white; background-color: #42545C; } nav a:active { background-color: black; } ``` #### `DashboardComponent` ``` <h2>Top Heroes</h2> <div class="heroes-menu"> <a *ngFor="let hero of heroes" routerLink="/detail/{{hero.id}}"> {{hero.name}} </a> </div> ``` ``` import { Component, OnInit } from '@angular/core'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-dashboard', templateUrl: './dashboard.component.html', styleUrls: [ './dashboard.component.css' ] }) export class DashboardComponent implements OnInit { heroes: Hero[] = []; constructor(private heroService: HeroService) { } ngOnInit(): void { this.getHeroes(); } getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes.slice(1, 5)); } } ``` ``` /* DashboardComponent's private CSS styles */ h2 { text-align: center; } .heroes-menu { padding: 0; margin: auto; max-width: 1000px; /* flexbox */ display: flex; flex-direction: row; flex-wrap: wrap; justify-content: space-around; align-content: flex-start; align-items: flex-start; } a { background-color: #3f525c; border-radius: 2px; padding: 1rem; font-size: 1.2rem; text-decoration: none; display: inline-block; color: #fff; text-align: center; width: 100%; min-width: 70px; margin: .5rem auto; box-sizing: border-box; /* flexbox */ order: 0; flex: 0 1 auto; align-self: auto; } @media (min-width: 600px) { a { width: 18%; box-sizing: content-box; } } a:hover { background-color: #000; } ``` #### `HeroesComponent` ``` <h2>My Heroes</h2> <ul class="heroes"> <li *ngFor="let hero of heroes"> <a routerLink="/detail/{{hero.id}}"> <span class="badge">{{hero.id}}</span> {{hero.name}} </a> </li> </ul> ``` ``` import { Component, OnInit } from '@angular/core'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent implements OnInit { heroes: Hero[] = []; constructor(private heroService: HeroService) { } ngOnInit(): void { this.getHeroes(); } getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes); } } ``` ``` /* HeroesComponent's private CSS styles */ .heroes { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 15em; } .heroes li { position: relative; cursor: pointer; } .heroes li:hover { left: .1em; } .heroes a { color: #333; text-decoration: none; background-color: #EEE; margin: .5em; padding: .3em 0; height: 1.6em; border-radius: 4px; display: block; width: 100%; } .heroes a:hover { color: #2c3a41; background-color: #e6e6e6; } .heroes a:active { background-color: #525252; color: #fafafa; } .heroes .badge { display: inline-block; 
font-size: small; color: white; padding: 0.8em 0.7em 0 0.7em; background-color: #405061; line-height: 1em; position: relative; left: -1px; top: -4px; height: 1.8em; min-width: 16px; text-align: right; margin-right: .8em; border-radius: 4px 0 0 4px; } ``` #### `HeroDetailComponent` ``` <div *ngIf="hero"> <h2>{{hero.name | uppercase}} Details</h2> <div><span>id: </span>{{hero.id}}</div> <div> <label for="hero-name">Hero name: </label> <input id="hero-name" [(ngModel)]="hero.name" placeholder="Hero name"/> </div> <button type="button" (click)="goBack()">go back</button> </div> ``` ``` import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { Location } from '@angular/common'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-hero-detail', templateUrl: './hero-detail.component.html', styleUrls: [ './hero-detail.component.css' ] }) export class HeroDetailComponent implements OnInit { hero: Hero | undefined; constructor( private route: ActivatedRoute, private heroService: HeroService, private location: Location ) {} ngOnInit(): void { this.getHero(); } getHero(): void { const id = Number(this.route.snapshot.paramMap.get('id')); this.heroService.getHero(id) .subscribe(hero => this.hero = hero); } goBack(): void { this.location.back(); } } ``` ``` /* HeroDetailComponent's private CSS styles */ label { color: #435960; font-weight: bold; } input { font-size: 1em; padding: .5rem; } button { margin-top: 20px; background-color: #eee; padding: 1rem; border-radius: 4px; font-size: 1rem; } button:hover { background-color: #cfd8dc; } button:disabled { background-color: #eee; color: #ccc; cursor: auto; } ``` Summary ------- * You added the Angular router to navigate among different components * You turned the `AppComponent` into a navigation shell with `<a>` links and a `<[router-outlet](../../api/router/routeroutlet)>` * You configured the router in an `AppRoutingModule` * You defined routes, a redirect route, and a parameterized route * You used the `[routerLink](../../api/router/routerlink)` directive in anchor elements * You refactored a tightly coupled main/detail view into a routed detail view * You used router link parameters to navigate to the detail view of a user-selected hero * You shared the `HeroService` with other components Last reviewed on Mon Feb 28 2022
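One optional refinement that this tutorial doesn't cover: the router's `routerLinkActive` directive adds a CSS class of your choosing to whichever navigation link matches the current URL, so the active view can be highlighted. The following is only a sketch with an inline template and a made-up component name, not a change to the `AppComponent` files above; it assumes `RouterModule` is imported as in this guide.

```
import { Component } from '@angular/core';

@Component({
  selector: 'app-nav-demo',
  // routerLinkActive applies the "active" class to the link whose route
  // is currently active; remove the class when the route deactivates.
  template: `
    <nav>
      <a routerLink="/dashboard" routerLinkActive="active">Dashboard</a>
      <a routerLink="/heroes" routerLinkActive="active">Heroes</a>
    </nav>
    <router-outlet></router-outlet>
  `,
  styles: ['nav a.active { background-color: #42545c; color: white; }'],
})
export class NavDemoComponent {}
```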
angular Display a selection list Display a selection list ======================== This tutorial shows you how to: * Expand the Tour of Heroes application to display a list of heroes. * Allow users to select a hero and display the hero's details. > For the sample application that this page describes, see the live example. > > Create mock heroes ------------------ The first step is to create some heroes to display. Create a file called `mock-heroes.ts` in the `src/app/` directory. Define a `HEROES` constant as an array of ten heroes and export it. The file should look like this. ``` import { Hero } from './hero'; export const HEROES: Hero[] = [ { id: 12, name: 'Dr. Nice' }, { id: 13, name: 'Bombasto' }, { id: 14, name: 'Celeritas' }, { id: 15, name: 'Magneta' }, { id: 16, name: 'RubberMan' }, { id: 17, name: 'Dynama' }, { id: 18, name: 'Dr. IQ' }, { id: 19, name: 'Magma' }, { id: 20, name: 'Tornado' } ]; ``` Displaying heroes ----------------- Open the `HeroesComponent` class file and import the mock `HEROES`. ``` import { HEROES } from '../mock-heroes'; ``` In `HeroesComponent` class, define a component property called `heroes` to expose the `HEROES` array for binding. ``` export class HeroesComponent { heroes = HEROES; } ``` ### List heroes with `*[ngFor](../../api/common/ngfor)` Open the `HeroesComponent` template file and make the following changes: 1. Add an `<h2>` at the top. 2. Below the `<h2>`, add a `<ul>` element. 3. In the `<ul>` element, insert an `<li>`. 4. Place a `<button>` inside the `<li>` that displays properties of a `hero` inside `<span>` elements. 5. Add CSS classes to style the component. to look like this: ``` <h2>My Heroes</h2> <ul class="heroes"> <li> <button type="button"> <span class="badge">{{hero.id}}</span> <span class="name">{{hero.name}}</span> </button> </li> </ul> ``` That displays an error since the `hero` property doesn't exist. To have access to each individual hero and list them all, add an `*[ngFor](../../api/common/ngfor)` to the `<li>` to iterate through the list of heroes: ``` <li *ngFor="let hero of heroes"> ``` The [`*ngFor`](../../guide/built-in-directives#ngFor) is Angular's *repeater* directive. It repeats the host element for each element in a list. The syntax in this example is as follows: | Syntax | Details | | --- | --- | | `<li>` | The host element. | | `heroes` | Holds the mock heroes list from the `HeroesComponent` class, the mock heroes list. | | `hero` | Holds the current hero object for each iteration through the list. | > Don't forget to put the asterisk `*` in front of `[ngFor](../../api/common/ngfor)`. It's a critical part of the syntax. > > After the browser refreshes, the list of heroes appears. Inside the `<li>` element, add a `<button>` element to wrap the hero's details, and then make the hero clickable. To improve accessibility, use HTML elements that are inherently interactive instead of adding an event listener to a non-interactive element. In this case, the interactive `<button>` element is used instead of adding an event to the `<li>` element. For more details on accessibility, see [Accessibility in Angular](../../guide/accessibility). ### Style the heroes The heroes list should be attractive and should respond visually when users hover over and select a hero from the list. In the [first tutorial](toh-pt0#app-wide-styles), you set the basic styles for the entire application in `styles.css`. That style sheet didn't include styles for this list of heroes. 
You could add more styles to `styles.css` and keep growing that style sheet as you add components. You may prefer instead to define private styles for a specific component. This keeps everything a component needs, such as the code, the HTML, and the CSS, together in one place. This approach makes it easier to re-use the component somewhere else and deliver the component's intended appearance even if the global styles are different. You define private styles either inline in the `@[Component.styles](../../api/core/component#styles)` array or as style sheet files identified in the `@[Component.styleUrls](../../api/core/component#styleUrls)` array. When the `ng generate` created the `HeroesComponent`, it created an empty `heroes.component.css` style sheet for the `HeroesComponent` and pointed to it in `@[Component.styleUrls](../../api/core/component#styleUrls)` like this. ``` @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) ``` Open the `heroes.component.css` file and paste in the private CSS styles for the `HeroesComponent` from the [final code review](toh-pt2#final-code-review). > Styles and style sheets identified in `@[Component](../../api/core/component)` metadata are scoped to that specific component. The `heroes.component.css` styles apply only to the `HeroesComponent` and don't affect the outer HTML or the HTML in any other component. > > Viewing details --------------- When the user clicks a hero in the list, the component should display the selected hero's details at the bottom of the page. The code in this section listens for the hero item click event and display/update the hero details. ### Add a click event binding Add a click event binding to the `<button>` in the `<li>` like this: ``` <li *ngFor="let hero of heroes"> <button type="button" (click)="onSelect(hero)"> <!-- ... --> ``` This is an example of Angular's [event binding](../../guide/event-binding) syntax. The parentheses around `click` tell Angular to listen for the `<button>` element's `click` event. When the user clicks in the `<button>`, Angular executes the `onSelect(hero)` expression. In the next section, define an `onSelect()` method in `HeroesComponent` to display the hero that was defined in the `*[ngFor](../../api/common/ngfor)` expression. ### Add the click event handler Rename the component's `hero` property to `selectedHero` but don't assign any value to it since there is no *selected hero* when the application starts. Add the following `onSelect()` method, which assigns the clicked hero from the template to the component's `selectedHero`. ``` selectedHero?: Hero; onSelect(hero: Hero): void { this.selectedHero = hero; } ``` ### Add a details section Currently, you have a list in the component template. To show details about a hero when you click their name in the list, add a section in the template that displays their details. Add the following to `heroes.component.html` beneath the list section: ``` <div *ngIf="selectedHero"> <h2>{{selectedHero.name | uppercase}} Details</h2> <div>id: {{selectedHero.id}}</div> <div> <label for="hero-name">Hero name: </label> <input id="hero-name" [(ngModel)]="selectedHero.name" placeholder="name"> </div> </div> ``` The hero details should only be displayed when a hero is selected. When a component is created initially, there is no selected hero. Add the `*[ngIf](../../api/common/ngif)` directive to the `<div>` that wraps the hero details. 
This directive tells Angular to render the section only when the `selectedHero` is defined after it has been selected by clicking on a hero. > Don't forget the asterisk `*` character in front of `[ngIf](../../api/common/ngif)`. It's a critical part of the syntax. > > ### Style the selected hero To help identify the selected hero, you can use the `.selected` CSS class in the [styles you added earlier](toh-pt2#styles). To apply the `.selected` class to the `<li>` when the user clicks it, use class binding. Angular's [class binding](../../guide/class-binding) can add and remove a CSS class conditionally. Add `[class.some-css-class]="some-condition"` to the element you want to style. Add the following `[class.selected]` binding to the `<button>` in the `HeroesComponent` template: ``` [class.selected]="hero === selectedHero" ``` When the current row hero is the same as the `selectedHero`, Angular adds the `selected` CSS class. When the two heroes are different, Angular removes the class. The finished `<li>` looks like this: ``` <li *ngFor="let hero of heroes"> <button [class.selected]="hero === selectedHero" type="button" (click)="onSelect(hero)"> <span class="badge">{{hero.id}}</span> <span class="name">{{hero.name}}</span> </button> </li> ``` Final code review ----------------- Here are the code files discussed on this page, including the `HeroesComponent` styles. ``` import { Hero } from './hero'; export const HEROES: Hero[] = [ { id: 12, name: 'Dr. Nice' }, { id: 13, name: 'Bombasto' }, { id: 14, name: 'Celeritas' }, { id: 15, name: 'Magneta' }, { id: 16, name: 'RubberMan' }, { id: 17, name: 'Dynama' }, { id: 18, name: 'Dr. IQ' }, { id: 19, name: 'Magma' }, { id: 20, name: 'Tornado' } ]; ``` ``` import { Component } from '@angular/core'; import { Hero } from '../hero'; import { HEROES } from '../mock-heroes'; @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent { heroes = HEROES; selectedHero?: Hero; onSelect(hero: Hero): void { this.selectedHero = hero; } } ``` ``` <h2>My Heroes</h2> <ul class="heroes"> <li *ngFor="let hero of heroes"> <button [class.selected]="hero === selectedHero" type="button" (click)="onSelect(hero)"> <span class="badge">{{hero.id}}</span> <span class="name">{{hero.name}}</span> </button> </li> </ul> <div *ngIf="selectedHero"> <h2>{{selectedHero.name | uppercase}} Details</h2> <div>id: {{selectedHero.id}}</div> <div> <label for="hero-name">Hero name: </label> <input id="hero-name" [(ngModel)]="selectedHero.name" placeholder="name"> </div> </div> ``` ``` /* HeroesComponent's private CSS styles */ .heroes { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 15em; } .heroes li { display: flex; } .heroes button { flex: 1; cursor: pointer; position: relative; left: 0; background-color: #EEE; margin: .5em; padding: 0; border-radius: 4px; display: flex; align-items: stretch; height: 1.8em; } .heroes button:hover { color: #2c3a41; background-color: #e6e6e6; left: .1em; } .heroes button:active { background-color: #525252; color: #fafafa; } .heroes button.selected { background-color: black; color: white; } .heroes button.selected:hover { background-color: #505050; color: white; } .heroes button.selected:active { background-color: black; color: white; } .heroes .badge { display: inline-block; font-size: small; color: white; padding: 0.8em 0.7em 0 0.7em; background-color: #405061; line-height: 1em; margin-right: .8em; border-radius: 4px 0 0 4px; } .heroes .name { align-self: 
center; } ``` Summary ------- * The Tour of Heroes application displays a list of heroes with a detail view. * The user can select a hero and see that hero's details. * You used `*[ngFor](../../api/common/ngfor)` to display a list. * You used `*[ngIf](../../api/common/ngif)` to conditionally include or exclude a block of HTML. * You can toggle a CSS style class with a `class` binding. Last reviewed on Mon May 23 2022 angular Create a feature component Create a feature component ========================== At the moment, the `HeroesComponent` displays both the list of heroes and the selected hero's details. Keeping all features in one component as the application grows won't be maintainable. This tutorial splits up large components into smaller subcomponents, each focused on a specific task or workflow. The first step is to move the hero details into a separate, reusable `HeroDetailComponent` and end up with: * A `HeroesComponent` that presents the list of heroes. * A `HeroDetailComponent` that presents the details of a selected hero. > For the sample application that this page describes, see the live example. > > Make the `HeroDetailComponent` ------------------------------ Use this `ng generate` command to create a new component named `hero-detail`. ``` ng generate component hero-detail ``` The command scaffolds the following: * Creates a directory `src/app/hero-detail`. Inside that directory, four files are created: * A CSS file for the component styles. * An HTML file for the component template. * A TypeScript file with a component class named `HeroDetailComponent`. * A test file for the `HeroDetailComponent` class. The command also adds the `HeroDetailComponent` as a declaration in the `@[NgModule](../../api/core/ngmodule)` decorator of the `src/app/app.module.ts` file. ### Write the template Cut the HTML for the hero detail from the bottom of the `HeroesComponent` template and paste it over the boilerplate content in the `HeroDetailComponent` template. The pasted HTML refers to a `selectedHero`. The new `HeroDetailComponent` can present *any* hero, not just a selected hero. Replace `selectedHero` with `hero` everywhere in the template. When you're done, the `HeroDetailComponent` template should look like this: ``` <div *ngIf="hero"> <h2>{{hero.name | uppercase}} Details</h2> <div><span>id: </span>{{hero.id}}</div> <div> <label for="hero-name">Hero name: </label> <input id="hero-name" [(ngModel)]="hero.name" placeholder="name"> </div> </div> ``` ### Add the `@[Input](../../api/core/input)()` hero property The `HeroDetailComponent` template binds to the component's `hero` property which is of type `Hero`. Open the `HeroDetailComponent` class file and import the `Hero` symbol. ``` import { Hero } from '../hero'; ``` The `hero` property [must be an `Input` property](../../guide/inputs-outputs "Input and Output properties"), annotated with the `@[Input](../../api/core/input)()` decorator, because the *external* `HeroesComponent` [binds to it](toh-pt3#heroes-component-template) like this. ``` <app-hero-detail [hero]="selectedHero"></app-hero-detail> ``` Amend the `@angular/core` import statement to include the `[Input](../../api/core/input)` symbol. ``` import { Component, Input } from '@angular/core'; ``` Add a `hero` property, preceded by the `@[Input](../../api/core/input)()` decorator. ``` @Input() hero?: Hero; ``` That's the only change you should make to the `HeroDetailComponent` class. There are no more properties. There's no presentation logic. 
This component only receives a hero object through its `hero` property and displays it. Show the `HeroDetailComponent` ------------------------------ The `HeroesComponent` used to display the hero details on its own, before you removed that part of the template. This section guides you through delegating logic to the `HeroDetailComponent`. The two components have a parent/child relationship. The parent, `HeroesComponent`, controls the child, `HeroDetailComponent` by sending it a new hero to display whenever the user selects a hero from the list. You don't need to change the `HeroesComponent` *class*, instead change its *template*. ### Update the `HeroesComponent` template The `HeroDetailComponent` selector is `'app-hero-detail'`. Add an `<app-hero-detail>` element near the bottom of the `HeroesComponent` template, where the hero detail view used to be. Bind the `HeroesComponent.selectedHero` to the element's `hero` property like this. ``` <app-hero-detail [hero]="selectedHero"></app-hero-detail> ``` `[hero]="selectedHero"` is an Angular [property binding](../../guide/property-binding). It's a *one-way* data binding from the `selectedHero` property of the `HeroesComponent` to the `hero` property of the target element, which maps to the `hero` property of the `HeroDetailComponent`. Now when the user clicks a hero in the list, the `selectedHero` changes. When the `selectedHero` changes, the *property binding* updates `hero` and the `HeroDetailComponent` displays the new hero. The revised `HeroesComponent` template should look like this: ``` <h2>My Heroes</h2> <ul class="heroes"> <li *ngFor="let hero of heroes"> <button [class.selected]="hero === selectedHero" type="button" (click)="onSelect(hero)"> <span class="badge">{{hero.id}}</span> <span class="name">{{hero.name}}</span> </button> </li> </ul> <app-hero-detail [hero]="selectedHero"></app-hero-detail> ``` The browser refreshes and the application starts working again as it did before. What changed? ------------- As [before](toh-pt2), whenever a user clicks on a hero name, the hero detail appears below the hero list. Now the `HeroDetailComponent` is presenting those details instead of the `HeroesComponent`. Refactoring the original `HeroesComponent` into two components yields benefits, both now and in the future: 1. You reduced the `HeroesComponent` responsibilities. 2. You can evolve the `HeroDetailComponent` into a rich hero editor without touching the parent `HeroesComponent`. 3. You can evolve the `HeroesComponent` without touching the hero detail view. 4. You can re-use the `HeroDetailComponent` in the template of some future component. Final code review ----------------- Here are the code files discussed on this page. 
``` import { Component, Input } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'app-hero-detail', templateUrl: './hero-detail.component.html', styleUrls: ['./hero-detail.component.css'] }) export class HeroDetailComponent { @Input() hero?: Hero; } ``` ``` <div *ngIf="hero"> <h2>{{hero.name | uppercase}} Details</h2> <div><span>id: </span>{{hero.id}}</div> <div> <label for="hero-name">Hero name: </label> <input id="hero-name" [(ngModel)]="hero.name" placeholder="name"> </div> </div> ``` ``` <h2>My Heroes</h2> <ul class="heroes"> <li *ngFor="let hero of heroes"> <button [class.selected]="hero === selectedHero" type="button" (click)="onSelect(hero)"> <span class="badge">{{hero.id}}</span> <span class="name">{{hero.name}}</span> </button> </li> </ul> <app-hero-detail [hero]="selectedHero"></app-hero-detail> ``` ``` import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { AppComponent } from './app.component'; import { HeroesComponent } from './heroes/heroes.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; @NgModule({ declarations: [ AppComponent, HeroesComponent, HeroDetailComponent ], imports: [ BrowserModule, FormsModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } ``` Summary ------- * You created a separate, reusable `HeroDetailComponent`. * You used a [property binding](../../guide/property-binding) to give the parent `HeroesComponent` control over the child `HeroDetailComponent`. * You used the [`@Input` decorator](../../guide/inputs-outputs) to make the `hero` property available for binding by the external `HeroesComponent`. angular Create a new project Create a new project ==================== Use the `ng new` command to start creating your **Tour of Heroes** application. This tutorial: 1. Sets up your environment. 2. Creates a new workspace and initial application project. 3. Serves the application. 4. Makes changes to the new application. > To view the application's code, see the live example. > > Set up your environment ----------------------- To set up your development environment, follow the instructions in [Local Environment Setup](../../guide/setup-local "Setting up for Local Development"). Create a new workspace and an initial application ------------------------------------------------- You develop applications in the context of an Angular [workspace](../../guide/glossary#workspace). A *workspace* contains the files for one or more [projects](../../guide/glossary#project). A *project* is the set of files that make up an application or a library. To create a new workspace and an initial project: 1. Ensure that you aren't already in an Angular workspace directory. For example, if you're in the Getting Started workspace from an earlier exercise, navigate to its parent. 2. Run `ng new` followed by the application name as shown here: ``` ng new angular-tour-of-heroes ``` 3. `ng new` prompts you for information about features to include in the initial project. Accept the defaults by pressing the Enter or Return key. `ng new` installs the necessary `npm` packages and other dependencies that Angular requires. This can take a few minutes. 
`ng new` also creates the following workspace and starter project files: * A new workspace, with a root directory named `angular-tour-of-heroes` * An initial skeleton application project in the `src/app` subdirectory * Related configuration files The initial application project contains a simple application that's ready to run. Serve the application --------------------- Go to the workspace directory and launch the application. ``` cd angular-tour-of-heroes ng serve --open ``` > The `ng serve` command: > > * Builds the application > * Starts the development server > * Watches the source files > * Rebuilds the application as you make changes > > The `--open` flag opens a browser to `http://localhost:4200`. > > You should see the application running in your browser. Angular components ------------------ The page you see is the *application shell*. The shell is controlled by an Angular **component** named `AppComponent`. *Components* are the fundamental building blocks of Angular applications. They display data on the screen, listen for user input, and take action based on that input. Make changes to the application ------------------------------- Open the project in your favorite editor or IDE. Navigate to the `src/app` directory to edit the starter application. In the IDE, locate these files, which make up the `AppComponent` that you just created: | Files | Details | | --- | --- | | `app.component.ts` | The component class code, written in TypeScript. | | `app.component.html` | The component template, written in HTML. | | `app.component.css` | The component's private CSS styles. | > When you ran `ng new`, Angular created test specifications for your new application. Unfortunately, making these changes breaks your newly created specifications. > > That won't be a problem because Angular testing is outside the scope of this tutorial and won't be used. > > To learn more about testing with Angular, see [Testing](../../guide/testing). > > ### Change the application title Open the `app.component.ts` and change the `title` property value to 'Tour of Heroes'. ``` title = 'Tour of Heroes'; ``` Open `app.component.html` and delete the default template that `ng new` created. Replace it with the following line of HTML. ``` <h1>{{title}}</h1> ``` The double curly braces are Angular's *interpolation binding* syntax. This interpolation binding presents the component's `title` property value inside the HTML header tag. The browser refreshes and displays the new application title. ### Add application styles Most apps strive for a consistent look across the application. `ng new` created an empty `styles.css` for this purpose. Put your application-wide styles there. Open `src/styles.css` and add the code below to the file. ``` /* Application-wide Styles */ h1 { color: #369; font-family: Arial, Helvetica, sans-serif; font-size: 250%; } h2, h3 { color: #444; font-family: Arial, Helvetica, sans-serif; font-weight: lighter; } body { margin: 2em; } body, input[type="text"], button { color: #333; font-family: Cambria, Georgia, serif; } button { background-color: #eee; border: none; border-radius: 4px; cursor: pointer; color: black; font-size: 1.2rem; padding: 1rem; margin-right: 1rem; margin-bottom: 1rem; margin-top: 1rem; } button:hover { background-color: black; color: white; } button:disabled { background-color: #eee; color: #aaa; cursor: auto; } /* everywhere else */ * { font-family: Arial, Helvetica, sans-serif; } ``` Final code review ----------------- Here are the code files discussed on this page. 
``` import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { title = 'Tour of Heroes'; } ``` ``` <h1>{{title}}</h1> ``` ``` /* Application-wide Styles */ h1 { color: #369; font-family: Arial, Helvetica, sans-serif; font-size: 250%; } h2, h3 { color: #444; font-family: Arial, Helvetica, sans-serif; font-weight: lighter; } body { margin: 2em; } body, input[type="text"], button { color: #333; font-family: Cambria, Georgia, serif; } button { background-color: #eee; border: none; border-radius: 4px; cursor: pointer; color: black; font-size: 1.2rem; padding: 1rem; margin-right: 1rem; margin-bottom: 1rem; margin-top: 1rem; } button:hover { background-color: black; color: white; } button:disabled { background-color: #eee; color: #aaa; cursor: auto; } /* everywhere else */ * { font-family: Arial, Helvetica, sans-serif; } ``` Summary ------- * You created the initial application structure using `ng new`. * You learned that Angular components display data * You used the double curly braces of interpolation to display the application title Last reviewed on Mon Feb 28 2022
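The title binding above is the simplest case. Interpolation accepts any template expression, not only a property name. The snippet below is an illustrative sketch with made-up names, not part of the tutorial project.

```
import { Component } from '@angular/core';

@Component({
  selector: 'app-title-demo',
  // Any template expression can appear between the double curly braces.
  template: `
    <h1>{{ title }}</h1>
    <p>{{ title.length }} characters, shouted: {{ shout() }}</p>
  `,
})
export class TitleDemoComponent {
  title = 'Tour of Heroes';

  shout(): string {
    return this.title.toUpperCase() + '!';
  }
}
```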
angular The hero editor The hero editor =============== The application now has a basic title. Next, create a new component to display hero information and place that component in the application shell. > For the sample application that this page describes, see the live example. > > Create the heroes component --------------------------- Use `ng generate` to create a new component named `heroes`. ``` ng generate component heroes ``` `ng generate` creates a new directory , `src/app/heroes/`, and generates the three files of the `HeroesComponent` along with a test file. The `HeroesComponent` class file is as follows: ``` import { Component } from '@angular/core'; @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent { } ``` You always import the `[Component](../../api/core/component)` symbol from the Angular core library and annotate the component class with `@[Component](../../api/core/component)`. `@[Component](../../api/core/component)` is a decorator function that specifies the Angular metadata for the component. `ng generate` created three metadata properties: | Properties | Details | | --- | --- | | `selector` | The component's CSS element selector. | | `templateUrl` | The location of the component's template file. | | `styleUrls` | The location of the component's private CSS styles. | The [CSS element selector](https://developer.mozilla.org/docs/Web/CSS/Type_selectors), `'app-heroes'`, matches the name of the HTML element that identifies this component within a parent component's template. Always `export` the component class so you can `import` it elsewhere … like in the `AppModule`. ### Add a `hero` property Add a `hero` property to the `HeroesComponent` for a hero named, `Windstorm`. ``` hero = 'Windstorm'; ``` ### Show the hero Open the `heroes.component.html` template file. Delete the default text that `ng generate` created and replace it with a data binding to the new `hero` property. ``` <h2>{{hero}}</h2> ``` Show the `HeroesComponent` view ------------------------------- To display the `HeroesComponent`, you must add it to the template of the shell `AppComponent`. Remember that `app-heroes` is the [element selector](toh-pt1#selector) for the `HeroesComponent`. Add an `<app-heroes>` element to the `AppComponent` template file, just below the title. ``` <h1>{{title}}</h1> <app-heroes></app-heroes> ``` If `ng serve` is still running, the browser should refresh and display both the application title and the hero's name. Create a `Hero` interface ------------------------- A real hero is more than a name. Create a `Hero` interface in its own file in the `src/app` directory . Give it `id` and `name` properties. ``` export interface Hero { id: number; name: string; } ``` Return to the `HeroesComponent` class and import the `Hero` interface. Refactor the component's `hero` property to be of type `Hero`. Initialize it with an `id` of `1` and the name `Windstorm`. The revised `HeroesComponent` class file should look like this: ``` import { Component } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent { hero: Hero = { id: 1, name: 'Windstorm' }; } ``` The page no longer displays properly because you changed the hero from a string to an object. 
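Interpolating an object directly, as the template still does with `{{hero}}`, renders the object's default string form rather than its fields. If you want to peek at the data while you work, the built-in `json` pipe can print it. This is only an optional debugging sketch with a made-up selector, not a tutorial step; the next section updates the binding properly.

```
import { Component } from '@angular/core';
import { Hero } from '../hero';

@Component({
  selector: 'app-hero-debug',
  // The built-in json pipe serializes the object so you can inspect it.
  template: `<pre>{{ hero | json }}</pre>`,
})
export class HeroDebugComponent {
  hero: Hero = { id: 1, name: 'Windstorm' };
}
```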
Show the hero object -------------------- Update the binding in the template to announce the hero's name and show both `id` and `name` in a details display like this: ``` <h2>{{hero.name}} Details</h2> <div><span>id: </span>{{hero.id}}</div> <div><span>name: </span>{{hero.name}}</div> ``` The browser refreshes and displays the hero's information. Format with the `UppercasePipe` ------------------------------- Edit the `hero.name` binding like this: ``` <h2>{{hero.name | uppercase}} Details</h2> ``` The browser refreshes and now the hero's name is displayed in capital letters. The word `[uppercase](../../api/common/uppercasepipe)` in the interpolation binding after the pipe `|` character, activates the built-in `UppercasePipe`. [Pipes](../../guide/pipes) are a good way to format strings, currency amounts, dates, and other display data. Angular ships with several built-in pipes and you can create your own. Edit the hero ------------- Users should be able to edit the hero's name in an `<input>` text box. The text box should both *display* the hero's `name` property and *update* that property as the user types. That means data flows from the component class *out to the screen* and from the screen *back to the class*. To automate that data flow, set up a two-way data binding between the `<input>` form element and the `hero.name` property. ### Two-way binding Refactor the details area in the `HeroesComponent` template so it looks like this: ``` <div> <label for="name">Hero name: </label> <input id="name" [(ngModel)]="hero.name" placeholder="name"> </div> ``` `[([ngModel](../../api/forms/ngmodel))]` is Angular's two-way data binding syntax. Here it binds the `hero.name` property to the HTML text box so that data can flow *in both directions*. Data can flow from the `hero.name` property to the text box and from the text box back to the `hero.name`. ### The missing `[FormsModule](../../api/forms/formsmodule)` Notice that the application stopped working when you added `[([ngModel](../../api/forms/ngmodel))]`. To see the error, open the browser development tools and look in the console for a message like ``` Template parse errors: Can't bind to 'ngModel' since it isn't a known property of 'input'. ``` Although `[ngModel](../../api/forms/ngmodel)` is a valid Angular directive, it isn't available by default. It belongs to the optional `[FormsModule](../../api/forms/formsmodule)` and you must *opt in* to using it. `AppModule` ----------- Angular needs to know how the pieces of your application fit together and what other files and libraries the application requires. This information is called *metadata*. Some of the metadata is in the `@[Component](../../api/core/component)` decorators that you added to your component classes. Other critical metadata is in [`@NgModule`](../../guide/ngmodules) decorators. The most important `@[NgModule](../../api/core/ngmodule)` decorator annotates the top-level **AppModule** class. `ng new` created an `AppModule` class in `src/app/app.module.ts` when it created the project. This is where you *opt in* to the `[FormsModule](../../api/forms/formsmodule)`. ### Import `[FormsModule](../../api/forms/formsmodule)` Open `app.module.ts` and import the `[FormsModule](../../api/forms/formsmodule)` symbol from the `@angular/forms` library. ``` import { FormsModule } from '@angular/forms'; // <-- NgModel lives here ``` Add `[FormsModule](../../api/forms/formsmodule)` to the `imports` array in `@[NgModule](../../api/core/ngmodule)`. 
The `imports` array contains the list of external modules that the application needs. ``` imports: [ BrowserModule, FormsModule ], ``` When the browser refreshes, the application should work again. You can edit the hero's name and see the changes reflected immediately in the `<h2>` above the text box. ### Declare `HeroesComponent` Every component must be declared in *exactly one* [NgModule](../../guide/ngmodules). *You* didn't declare the `HeroesComponent`. Why did the application work? It worked because the `ng generate` declared `HeroesComponent` in `AppModule` when it created that component. Open `src/app/app.module.ts` and find `HeroesComponent` imported near the top. ``` import { HeroesComponent } from './heroes/heroes.component'; ``` The `HeroesComponent` is declared in the `@[NgModule.declarations](../../api/core/ngmodule#declarations)` array. ``` declarations: [ AppComponent, HeroesComponent ], ``` > `AppModule` declares both application components, `AppComponent` and `HeroesComponent`. > > Final code review ----------------- Here are the code files discussed on this page. ``` import { Component } from '@angular/core'; import { Hero } from '../hero'; @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent { hero: Hero = { id: 1, name: 'Windstorm' }; } ``` ``` <h2>{{hero.name | uppercase}} Details</h2> <div><span>id: </span>{{hero.id}}</div> <div> <label for="name">Hero name: </label> <input id="name" [(ngModel)]="hero.name" placeholder="name"> </div> ``` ``` import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; // <-- NgModel lives here import { AppComponent } from './app.component'; import { HeroesComponent } from './heroes/heroes.component'; @NgModule({ declarations: [ AppComponent, HeroesComponent ], imports: [ BrowserModule, FormsModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } ``` ``` import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { title = 'Tour of Heroes'; } ``` ``` <h1>{{title}}</h1> <app-heroes></app-heroes> ``` ``` export interface Hero { id: number; name: string; } ``` Summary ------- * You used `ng generate` to create a second `HeroesComponent`. * You displayed the `HeroesComponent` by adding it to the `AppComponent` shell. * You applied the `UppercasePipe` to format the name. * You used two-way data binding with the `[ngModel](../../api/forms/ngmodel)` directive. * You learned about the `AppModule`. * You imported the `[FormsModule](../../api/forms/formsmodule)` in the `AppModule` so that Angular would recognize and apply the `[ngModel](../../api/forms/ngmodel)` directive. * You learned the importance of declaring components in the `AppModule`. Last reviewed on Mon Feb 28 2022 angular Get data from a server Get data from a server ====================== This tutorial adds the following data persistence features with help from Angular's `[HttpClient](../../api/common/http/httpclient)`. * The `HeroService` gets hero data with HTTP requests * Users can add, edit, and delete heroes and save these changes over HTTP * Users can search for heroes by name > For the sample application that this page describes, see the live example. 
> > Enable HTTP services -------------------- `[HttpClient](../../api/common/http/httpclient)` is Angular's mechanism for communicating with a remote server over HTTP. Make `[HttpClient](../../api/common/http/httpclient)` available everywhere in the application in two steps. First, add it to the root `AppModule` by importing it: ``` import { HttpClientModule } from '@angular/common/http'; ``` Next, still in the `AppModule`, add `[HttpClientModule](../../api/common/http/httpclientmodule)` to the `imports` array: ``` @NgModule({ imports: [ HttpClientModule, ], }) ``` Simulate a data server ---------------------- This tutorial sample mimics communication with a remote data server by using the [In-memory Web API](https://github.com/angular/angular/tree/main/packages/misc/angular-in-memory-web-api "In-memory Web API") module. After installing the module, the application makes requests to and receive responses from the `[HttpClient](../../api/common/http/httpclient)`. The application doesn't know that the *In-memory Web API* is intercepting those requests, applying them to an in-memory data store, and returning simulated responses. By using the In-memory Web API, you won't have to set up a server to learn about `[HttpClient](../../api/common/http/httpclient)`. > **IMPORTANT**: The In-memory Web API module has nothing to do with HTTP in Angular. > > If you're reading this tutorial to learn about `[HttpClient](../../api/common/http/httpclient)`, you can [skip over](toh-pt6#import-heroes) this step. If you're coding along with this tutorial, stay here and add the In-memory Web API now. > > Install the In-memory Web API package from npm with the following command: ``` npm install angular-in-memory-web-api --save ``` In the `AppModule`, import the `HttpClientInMemoryWebApiModule` and the `InMemoryDataService` class, which you create next. ``` import { HttpClientInMemoryWebApiModule } from 'angular-in-memory-web-api'; import { InMemoryDataService } from './in-memory-data.service'; ``` After the `[HttpClientModule](../../api/common/http/httpclientmodule)`, add the `HttpClientInMemoryWebApiModule` to the `AppModule` `imports` array and configure it with the `InMemoryDataService`. ``` HttpClientModule, // The HttpClientInMemoryWebApiModule module intercepts HTTP requests // and returns simulated server responses. // Remove it when a real server is ready to receive requests. HttpClientInMemoryWebApiModule.forRoot( InMemoryDataService, { dataEncapsulation: false } ) ``` The `forRoot()` configuration method takes an `InMemoryDataService` class that primes the in-memory database. Generate the class `src/app/in-memory-data.service.ts` with the following command: ``` ng generate service InMemoryData ``` Replace the default contents of `in-memory-data.service.ts` with the following: ``` import { Injectable } from '@angular/core'; import { InMemoryDbService } from 'angular-in-memory-web-api'; import { Hero } from './hero'; @Injectable({ providedIn: 'root', }) export class InMemoryDataService implements InMemoryDbService { createDb() { const heroes = [ { id: 12, name: 'Dr. Nice' }, { id: 13, name: 'Bombasto' }, { id: 14, name: 'Celeritas' }, { id: 15, name: 'Magneta' }, { id: 16, name: 'RubberMan' }, { id: 17, name: 'Dynama' }, { id: 18, name: 'Dr. IQ' }, { id: 19, name: 'Magma' }, { id: 20, name: 'Tornado' } ]; return {heroes}; } // Overrides the genId method to ensure that a hero always has an id. // If the heroes array is empty, // the method below returns the initial number (11). 
// if the heroes array is not empty, the method below returns the highest // hero id + 1. genId(heroes: Hero[]): number { return heroes.length > 0 ? Math.max(...heroes.map(hero => hero.id)) + 1 : 11; } } ``` The `in-memory-data.service.ts` file takes over the function of `mock-heroes.ts`. Don't delete `mock-heroes.ts` yet. You still need it for a few more steps of this tutorial. After the server is ready, detach the In-memory Web API so the application's requests can go through to the server. Heroes and HTTP --------------- In the `HeroService`, import `[HttpClient](../../api/common/http/httpclient)` and `[HttpHeaders](../../api/common/http/httpheaders)`: ``` import { HttpClient, HttpHeaders } from '@angular/common/http'; ``` Still in the `HeroService`, inject `[HttpClient](../../api/common/http/httpclient)` into the constructor in a private property called `[http](../../api/common/http)`. ``` constructor( private http: HttpClient, private messageService: MessageService) { } ``` Notice that you keep injecting the `MessageService` but since your application calls it so frequently, wrap it in a private `log()` method: ``` /** Log a HeroService message with the MessageService */ private log(message: string) { this.messageService.add(`HeroService: ${message}`); } ``` Define the `heroesUrl` of the form `:base/:collectionName` with the address of the heroes resource on the server. Here `base` is the resource to which requests are made, and `collectionName` is the heroes data object in the `in-memory-data-service.ts`. ``` private heroesUrl = 'api/heroes'; // URL to web api ``` ### Get heroes with `[HttpClient](../../api/common/http/httpclient)` The current `HeroService.getHeroes()` uses the RxJS `of()` function to return an array of mock heroes as an `Observable<Hero[]>`. ``` getHeroes(): Observable<Hero[]> { const heroes = of(HEROES); return heroes; } ``` Convert that method to use `[HttpClient](../../api/common/http/httpclient)` as follows: ``` /** GET heroes from the server */ getHeroes(): Observable<Hero[]> { return this.http.get<Hero[]>(this.heroesUrl) } ``` Refresh the browser. The hero data should successfully load from the mock server. You've swapped `of()` for `http.get()` and the application keeps working without any other changes because both functions return an `Observable<Hero[]>`. ### `[HttpClient](../../api/common/http/httpclient)` methods return one value All `[HttpClient](../../api/common/http/httpclient)` methods return an RxJS `Observable` of something. HTTP is a request/response protocol. You make a request, it returns a single response. In general, an observable *can* return more than one value over time. An observable from `[HttpClient](../../api/common/http/httpclient)` always emits a single value and then completes, never to emit again. This particular call to `[HttpClient.get()](../../api/common/http/httpclient#get)` returns an `Observable<Hero[]>`, which is *an observable of hero arrays*. In practice, it only returns a single hero array. ### `[HttpClient.get()](../../api/common/http/httpclient#get)` returns response data `[HttpClient.get()](../../api/common/http/httpclient#get)` returns the body of the response as an untyped JSON object by default. Applying the optional type specifier, `<Hero[]>` , adds TypeScript capabilities, which reduce errors during compile time. The server's data API determines the shape of the JSON data. The *Tour of Heroes* data API returns the hero data as an array. > Other APIs may bury the data that you want within an object. 
You might have to dig that data out by processing the `Observable` result with the RxJS `map()` operator. > > Although not discussed here, there's an example of `map()` in the `getHeroNo404()` method included in the sample source code. > > ### Error handling Things go wrong, especially when you're getting data from a remote server. The `HeroService.getHeroes()` method should catch errors and do something appropriate. To catch errors, you **"pipe" the observable** result from `http.get()` through an RxJS `catchError()` operator. Import the `catchError` symbol from `rxjs/operators`, along with some other operators to use later. ``` import { catchError, map, tap } from 'rxjs/operators'; ``` Now extend the observable result with the `pipe()` method and give it a `catchError()` operator. ``` getHeroes(): Observable<Hero[]> { return this.http.get<Hero[]>(this.heroesUrl) .pipe( catchError(this.handleError<Hero[]>('getHeroes', [])) ); } ``` The `catchError()` operator intercepts an **`Observable` that failed**. The operator then passes the error to the error handling function. The following `handleError()` method reports the error and then returns an innocuous result so that the application keeps working. #### `handleError` The following `handleError()` can be shared by many `HeroService` methods so it's generalized to meet their different needs. Instead of handling the error directly, it returns an error handler function to `catchError`. This function is configured with both the name of the operation that failed and a safe return value. ``` /** * Handle Http operation that failed. * Let the app continue. * * @param operation - name of the operation that failed * @param result - optional value to return as the observable result */ private handleError<T>(operation = 'operation', result?: T) { return (error: any): Observable<T> => { // TODO: send the error to remote logging infrastructure console.error(error); // log to console instead // TODO: better job of transforming error for user consumption this.log(`${operation} failed: ${error.message}`); // Let the app keep running by returning an empty result. return of(result as T); }; } ``` After reporting the error to the console, the handler constructs a friendly message and returns a safe value so the application can keep working. Because each service method returns a different kind of `Observable` result, `handleError()` takes a type parameter to return the safe value as the type that the application expects. ### Tap into the Observable The `HeroService` methods taps into the flow of observable values and send a message, using the `log()` method, to the message area at the bottom of the page. The RxJS `tap()` operator enables this ability by looking at the observable values, doing something with those values, and passing them along. The `tap()` call back doesn't access the values themselves. Here is the final version of `getHeroes()` with the `tap()` that logs the operation. ``` /** GET heroes from the server */ getHeroes(): Observable<Hero[]> { return this.http.get<Hero[]>(this.heroesUrl) .pipe( tap(_ => this.log('fetched heroes')), catchError(this.handleError<Hero[]>('getHeroes', [])) ); } ``` ### Get hero by id Most web APIs support a *get by id* request in the form `:baseURL/:id`. Here, the *base URL* is the `heroesURL` defined in the [Heroes and HTTP](toh-pt6#heroes-and-http) section in `api/heroes` and *id* is the number of the hero that you want to retrieve. For example, `api/heroes/11`. 
Update the `HeroService` `getHero()` method with the following to make that request: ``` /** GET hero by id. Will 404 if id not found */ getHero(id: number): Observable<Hero> { const url = `${this.heroesUrl}/${id}`; return this.http.get<Hero>(url).pipe( tap(_ => this.log(`fetched hero id=${id}`)), catchError(this.handleError<Hero>(`getHero id=${id}`)) ); } ``` `getHero()` has three significant differences from `getHeroes()`: * `getHero()` constructs a request URL with the desired hero's id * The server should respond with a single hero rather than an array of heroes * `getHero()` returns an `Observable<Hero>`, which is an observable of `Hero` *objects* rather than an observable of `Hero` *arrays*. Update heroes ------------- Edit a hero's name in the hero detail view. As you type, the hero name updates the heading at the top of the page, yet when you click **Go back**, your changes are lost. If you want changes to persist, you must write them back to the server. At the end of the hero detail template, add a save button with a `click` event binding that invokes a new component method named `save()`. ``` <button type="button" (click)="save()">save</button> ``` In the `HeroDetail` component class, add the following `save()` method, which persists hero name changes using the hero service `updateHero()` method and then navigates back to the previous view. ``` save(): void { if (this.hero) { this.heroService.updateHero(this.hero) .subscribe(() => this.goBack()); } } ``` #### Add `HeroService.updateHero()` The structure of the `updateHero()` method is like that of `getHeroes()`, but it uses `http.put()` to persist the changed hero on the server. Add the following to the `HeroService`. ``` /** PUT: update the hero on the server */ updateHero(hero: Hero): Observable<any> { return this.http.put(this.heroesUrl, hero, this.httpOptions).pipe( tap(_ => this.log(`updated hero id=${hero.id}`)), catchError(this.handleError<any>('updateHero')) ); } ``` The `[HttpClient.put()](../../api/common/http/httpclient#put)` method takes three parameters: * The URL * The data to update, which is the modified hero in this case * Options The URL is unchanged. The heroes web API knows which hero to update by looking at the hero's `id`. The heroes web API expects a special header in HTTP save requests. That header is in the `httpOptions` constant defined in the `HeroService`. Add the following to the `HeroService` class. ``` httpOptions = { headers: new HttpHeaders({ 'Content-Type': 'application/json' }) }; ``` Refresh the browser, change a hero name and save your change. The `save()` method in `HeroDetailComponent` navigates to the previous view. The hero now appears in the list with the changed name. Add a new hero -------------- To add a hero, this application only needs the hero's name. You can use an `<input>` element paired with an add button. Insert the following into the `HeroesComponent` template, after the heading: ``` <div> <label for="new-hero">Hero name: </label> <input id="new-hero" #heroName /> <!-- (click) passes input value to add() and then clears the input --> <button type="button" class="add-button" (click)="add(heroName.value); heroName.value=''"> Add hero </button> </div> ``` In response to a click event, call the component's click handler, `add()`, and then clear the input field so that it's ready for another name. 
Add the following to the `HeroesComponent` class: ``` add(name: string): void { name = name.trim(); if (!name) { return; } this.heroService.addHero({ name } as Hero) .subscribe(hero => { this.heroes.push(hero); }); } ``` When the given name isn't blank, the handler creates an object based on the hero's name. The handler passes the object name to the service's `addHero()` method. When `addHero()` creates a new object, the `subscribe()` callback receives the new hero and pushes it into the `heroes` list for display. Add the following `addHero()` method to the `HeroService` class. ``` /** POST: add a new hero to the server */ addHero(hero: Hero): Observable<Hero> { return this.http.post<Hero>(this.heroesUrl, hero, this.httpOptions).pipe( tap((newHero: Hero) => this.log(`added hero w/ id=${newHero.id}`)), catchError(this.handleError<Hero>('addHero')) ); } ``` `addHero()` differs from `updateHero()` in two ways: * It calls `[HttpClient.post()](../../api/common/http/httpclient#post)` instead of `put()` * It expects the server to create an id for the new hero, which it returns in the `Observable<Hero>` to the caller Refresh the browser and add some heroes. Delete a hero ------------- Each hero in the heroes list should have a delete button. Add the following button element to the `HeroesComponent` template, after the hero name in the repeated `<li>` element. ``` <button type="button" class="delete" title="delete hero" (click)="delete(hero)">x</button> ``` The HTML for the list of heroes should look like this: ``` <ul class="heroes"> <li *ngFor="let hero of heroes"> <a routerLink="/detail/{{hero.id}}"> <span class="badge">{{hero.id}}</span> {{hero.name}} </a> <button type="button" class="delete" title="delete hero" (click)="delete(hero)">x</button> </li> </ul> ``` To position the delete button at the far right of the hero entry, add some CSS from the [final review code](toh-pt6#heroescomponent) to the `heroes.component.css`. Add the `delete()` handler to the component class. ``` delete(hero: Hero): void { this.heroes = this.heroes.filter(h => h !== hero); this.heroService.deleteHero(hero.id).subscribe(); } ``` Although the component delegates hero deletion to the `HeroService`, it remains responsible for updating its own list of heroes. The component's `delete()` method immediately removes the *hero-to-delete* from that list, anticipating that the `HeroService` succeeds on the server. There's really nothing for the component to do with the `Observable` returned by `heroService.deleteHero()` **but it must subscribe anyway**. Next, add a `deleteHero()` method to `HeroService` like this. ``` /** DELETE: delete the hero from the server */ deleteHero(id: number): Observable<Hero> { const url = `${this.heroesUrl}/${id}`; return this.http.delete<Hero>(url, this.httpOptions).pipe( tap(_ => this.log(`deleted hero id=${id}`)), catchError(this.handleError<Hero>('deleteHero')) ); } ``` Notice the following key points: * `deleteHero()` calls `[HttpClient.delete()](../../api/common/http/httpclient#delete)` * The URL is the heroes resource URL plus the `id` of the hero to delete * You don't send data as you did with `put()` and `post()` * You still send the `httpOptions` Refresh the browser and try the new delete capability. > If you neglect to `subscribe()`, the service can't send the delete request to the server. As a rule, an `Observable` *does nothing* until something subscribes. > > Confirm this for yourself by temporarily removing the `subscribe()`, clicking **Dashboard**, then clicking **Heroes**. This shows the full list of heroes again. > >
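To see that rule in isolation, here is a small standalone RxJS sketch — not part of the tutorial's code — showing that the work wrapped in an `Observable` only runs once something subscribes:

```
import { Observable } from 'rxjs';

// Creating the observable does not start the work...
const request$ = new Observable<string>(subscriber => {
  console.log('work starts now'); // runs only when something subscribes
  subscriber.next('done');
  subscriber.complete();
});

// ...only subscribing does.
request$.subscribe(value => console.log(value));
```

The same applies to the observable returned by `heroService.deleteHero()`: without the `subscribe()`, the DELETE request is never sent.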
Search by name -------------- In this last exercise, you learn to chain `Observable` operators together so you can reduce the number of similar HTTP requests and consume network bandwidth economically. ### Add a heroes search feature to the Dashboard As the user types a name into a search box, your application makes repeated HTTP requests for heroes filtered by that name. Your goal is to issue only as many requests as necessary. #### `HeroService.searchHeroes()` Start by adding a `searchHeroes()` method to the `HeroService`. ``` /* GET heroes whose name contains search term */ searchHeroes(term: string): Observable<Hero[]> { if (!term.trim()) { // if not search term, return empty hero array. return of([]); } return this.http.get<Hero[]>(`${this.heroesUrl}/?name=${term}`).pipe( tap(x => x.length ? this.log(`found heroes matching "${term}"`) : this.log(`no heroes matching "${term}"`)), catchError(this.handleError<Hero[]>('searchHeroes', [])) ); } ``` The method returns immediately with an empty array if there is no search term. The rest of it closely resembles `getHeroes()`, the only significant difference being the URL, which includes a query string with the search term. ### Add search to the dashboard Open the `DashboardComponent` template and add the hero search element, `<app-hero-search>`, to the bottom of the markup. ``` <h2>Top Heroes</h2> <div class="heroes-menu"> <a *ngFor="let hero of heroes" routerLink="/detail/{{hero.id}}"> {{hero.name}} </a> </div> <app-hero-search></app-hero-search> ``` This template looks a lot like the `*[ngFor](../../api/common/ngfor)` repeater in the `HeroesComponent` template. For this to work, the next step is to add a component with a selector that matches `<app-hero-search>`. ### Create `HeroSearchComponent` Run `ng generate` to create a `HeroSearchComponent`. ``` ng generate component hero-search ``` `ng generate` creates the three `HeroSearchComponent` files and adds the component to the `AppModule` declarations. Replace the `HeroSearchComponent` template with an `<input>` and a list of matching search results, as follows. ``` <div id="search-component"> <label for="search-box">Hero Search</label> <input #searchBox id="search-box" (input)="search(searchBox.value)" /> <ul class="search-result"> <li *ngFor="let hero of heroes$ | async" > <a routerLink="/detail/{{hero.id}}"> {{hero.name}} </a> </li> </ul> </div> ``` Add private CSS styles to `hero-search.component.css` as listed in the [final code review](toh-pt6#herosearchcomponent) below. As the user types in the search box, an input event binding calls the component's `search()` method with the new search box value. ### `[AsyncPipe](../../api/common/asyncpipe)` The `*[ngFor](../../api/common/ngfor)` repeats hero objects. Notice that the `*[ngFor](../../api/common/ngfor)` iterates over a list called `heroes$`, not `heroes`. The `$` is a convention that indicates `heroes$` is an `Observable`, not an array. ``` <li *ngFor="let hero of heroes$ | async" > ``` Since `*[ngFor](../../api/common/ngfor)` can't do anything with an `Observable`, use the pipe `|` character followed by `[async](../../api/common/asyncpipe)`. This identifies Angular's `[AsyncPipe](../../api/common/asyncpipe)` and subscribes to an `Observable` automatically so you won't have to do so in the component class. ### Edit the `HeroSearchComponent` class Replace the `HeroSearchComponent` class and metadata as follows.
``` import { Component, OnInit } from '@angular/core'; import { Observable, Subject } from 'rxjs'; import { debounceTime, distinctUntilChanged, switchMap } from 'rxjs/operators'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-hero-search', templateUrl: './hero-search.component.html', styleUrls: [ './hero-search.component.css' ] }) export class HeroSearchComponent implements OnInit { heroes$!: Observable<Hero[]>; private searchTerms = new Subject<string>(); constructor(private heroService: HeroService) {} // Push a search term into the observable stream. search(term: string): void { this.searchTerms.next(term); } ngOnInit(): void { this.heroes$ = this.searchTerms.pipe( // wait 300ms after each keystroke before considering the term debounceTime(300), // ignore new term if same as previous term distinctUntilChanged(), // switch to new search observable each time the term changes switchMap((term: string) => this.heroService.searchHeroes(term)), ); } } ``` Notice the declaration of `heroes$` as an `Observable`: ``` heroes$!: Observable<Hero[]>; ``` Set this in [`ngOnInit()`](toh-pt6#search-pipe). Before you do, focus on the definition of `searchTerms`. ### The `searchTerms` RxJS subject The `searchTerms` property is an RxJS `Subject`. ``` private searchTerms = new Subject<string>(); // Push a search term into the observable stream. search(term: string): void { this.searchTerms.next(term); } ``` A `Subject` is both a source of observable values and an `Observable` itself. You can subscribe to a `Subject` as you would any `Observable`. You can also push values into that `Observable` by calling its `next(value)` method as the `search()` method does. The event binding to the text box's `input` event calls the `search()` method. ``` <input #searchBox id="search-box" (input)="search(searchBox.value)" /> ``` Every time the user types in the text box, the binding calls `search()` with the text box value as a *search term*. The `searchTerms` property becomes an `Observable` emitting a steady stream of search terms. ### Chaining RxJS operators Passing a new search term directly to the `searchHeroes()` after every user keystroke creates excessive HTTP requests, which taxes server resources and burns through data plans. Instead, the `ngOnInit()` method pipes the `searchTerms` observable through a sequence of RxJS operators that reduce the number of calls to the `searchHeroes()`. Ultimately, this returns an observable of timely hero search results where each one is a `Hero[]`. Here's a closer look at the code. ``` this.heroes$ = this.searchTerms.pipe( // wait 300ms after each keystroke before considering the term debounceTime(300), // ignore new term if same as previous term distinctUntilChanged(), // switch to new search observable each time the term changes switchMap((term: string) => this.heroService.searchHeroes(term)), ); ``` Each operator works as follows: * `debounceTime(300)` waits until the flow of new string events pauses for 300 milliseconds before passing along the latest string. Requests aren't likely to happen more frequently than 300 ms. * `distinctUntilChanged()` ensures that a request is sent only if the filter text changed. * `switchMap()` calls the search service for each search term that makes it through `debounceTime()` and `distinctUntilChanged()`. It cancels and discards previous search observables, returning only the latest search service observable.
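To see how these operators shape the request stream on their own, here is a small standalone RxJS sketch that is not part of the tutorial's code; `fakeSearch` stands in for `HeroService.searchHeroes()` and exists only for illustration:

```
import { of, Subject } from 'rxjs';
import { debounceTime, delay, distinctUntilChanged, switchMap } from 'rxjs/operators';

// Stand-in for HeroService.searchHeroes(): emits a fake result after a short delay.
const fakeSearch = (term: string) => of(`results for "${term}"`).pipe(delay(500));

const searchTerms = new Subject<string>();

searchTerms.pipe(
  debounceTime(300),                   // wait for a 300 ms pause in typing
  distinctUntilChanged(),              // skip terms identical to the previous one
  switchMap(term => fakeSearch(term)), // drop any in-flight search for an older term
).subscribe(result => console.log(result));

// Simulate rapid typing. Only `results for "mag"` is logged.
searchTerms.next('m');
searchTerms.next('ma');
searchTerms.next('mag');
```

Only the last term survives the 300 ms debounce, so only one simulated search runs.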
> With the [`switchMap` operator](https://www.learnrxjs.io/learn-rxjs/operators/transformation/switchmap), every qualifying key event can trigger an `[HttpClient.get()](../../api/common/http/httpclient#get)` method call. Even with a 300 ms pause between requests, you could have many HTTP requests in flight and they may not return in the order sent. > > `switchMap()` preserves the original request order while returning only the observable from the most recent HTTP method call. Results from prior calls are canceled and discarded. > > > > Canceling a previous `searchHeroes()` Observable doesn't actually cancel a pending HTTP request. Unwanted results are discarded before they reach your application code. > > > > > > Remember that the component *class* doesn't subscribe to the `heroes$` *observable*. That's the job of the [`AsyncPipe`](toh-pt6#asyncpipe) in the template. #### Try it Run the application again. In the *Dashboard*, enter some text in the search box. Enter characters that match any existing hero names, and look for something like this. Final code review ----------------- Here are the code files discussed on this page. They're found in the `src/app/` directory. ### `HeroService`, `InMemoryDataService`, `AppModule` ``` import { Injectable } from '@angular/core'; import { HttpClient, HttpHeaders } from '@angular/common/http'; import { Observable, of } from 'rxjs'; import { catchError, map, tap } from 'rxjs/operators'; import { Hero } from './hero'; import { MessageService } from './message.service'; @Injectable({ providedIn: 'root' }) export class HeroService { private heroesUrl = 'api/heroes'; // URL to web api httpOptions = { headers: new HttpHeaders({ 'Content-Type': 'application/json' }) }; constructor( private http: HttpClient, private messageService: MessageService) { } /** GET heroes from the server */ getHeroes(): Observable<Hero[]> { return this.http.get<Hero[]>(this.heroesUrl) .pipe( tap(_ => this.log('fetched heroes')), catchError(this.handleError<Hero[]>('getHeroes', [])) ); } /** GET hero by id. Return `undefined` when id not found */ getHeroNo404<Data>(id: number): Observable<Hero> { const url = `${this.heroesUrl}/?id=${id}`; return this.http.get<Hero[]>(url) .pipe( map(heroes => heroes[0]), // returns a {0|1} element array tap(h => { const outcome = h ? 'fetched' : 'did not find'; this.log(`${outcome} hero id=${id}`); }), catchError(this.handleError<Hero>(`getHero id=${id}`)) ); } /** GET hero by id. Will 404 if id not found */ getHero(id: number): Observable<Hero> { const url = `${this.heroesUrl}/${id}`; return this.http.get<Hero>(url).pipe( tap(_ => this.log(`fetched hero id=${id}`)), catchError(this.handleError<Hero>(`getHero id=${id}`)) ); } /* GET heroes whose name contains search term */ searchHeroes(term: string): Observable<Hero[]> { if (!term.trim()) { // if not search term, return empty hero array. return of([]); } return this.http.get<Hero[]>(`${this.heroesUrl}/?name=${term}`).pipe( tap(x => x.length ? 
this.log(`found heroes matching "${term}"`) : this.log(`no heroes matching "${term}"`)), catchError(this.handleError<Hero[]>('searchHeroes', [])) ); } //////// Save methods ////////// /** POST: add a new hero to the server */ addHero(hero: Hero): Observable<Hero> { return this.http.post<Hero>(this.heroesUrl, hero, this.httpOptions).pipe( tap((newHero: Hero) => this.log(`added hero w/ id=${newHero.id}`)), catchError(this.handleError<Hero>('addHero')) ); } /** DELETE: delete the hero from the server */ deleteHero(id: number): Observable<Hero> { const url = `${this.heroesUrl}/${id}`; return this.http.delete<Hero>(url, this.httpOptions).pipe( tap(_ => this.log(`deleted hero id=${id}`)), catchError(this.handleError<Hero>('deleteHero')) ); } /** PUT: update the hero on the server */ updateHero(hero: Hero): Observable<any> { return this.http.put(this.heroesUrl, hero, this.httpOptions).pipe( tap(_ => this.log(`updated hero id=${hero.id}`)), catchError(this.handleError<any>('updateHero')) ); } /** * Handle Http operation that failed. * Let the app continue. * * @param operation - name of the operation that failed * @param result - optional value to return as the observable result */ private handleError<T>(operation = 'operation', result?: T) { return (error: any): Observable<T> => { // TODO: send the error to remote logging infrastructure console.error(error); // log to console instead // TODO: better job of transforming error for user consumption this.log(`${operation} failed: ${error.message}`); // Let the app keep running by returning an empty result. return of(result as T); }; } /** Log a HeroService message with the MessageService */ private log(message: string) { this.messageService.add(`HeroService: ${message}`); } } ``` ``` import { Injectable } from '@angular/core'; import { InMemoryDbService } from 'angular-in-memory-web-api'; import { Hero } from './hero'; @Injectable({ providedIn: 'root', }) export class InMemoryDataService implements InMemoryDbService { createDb() { const heroes = [ { id: 12, name: 'Dr. Nice' }, { id: 13, name: 'Bombasto' }, { id: 14, name: 'Celeritas' }, { id: 15, name: 'Magneta' }, { id: 16, name: 'RubberMan' }, { id: 17, name: 'Dynama' }, { id: 18, name: 'Dr. IQ' }, { id: 19, name: 'Magma' }, { id: 20, name: 'Tornado' } ]; return {heroes}; } // Overrides the genId method to ensure that a hero always has an id. // If the heroes array is empty, // the method below returns the initial number (11). // if the heroes array is not empty, the method below returns the highest // hero id + 1. genId(heroes: Hero[]): number { return heroes.length > 0 ? 
Math.max(...heroes.map(hero => hero.id)) + 1 : 11; } } ``` ``` import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; import { HttpClientModule } from '@angular/common/http'; import { HttpClientInMemoryWebApiModule } from 'angular-in-memory-web-api'; import { InMemoryDataService } from './in-memory-data.service'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; import { DashboardComponent } from './dashboard/dashboard.component'; import { HeroDetailComponent } from './hero-detail/hero-detail.component'; import { HeroesComponent } from './heroes/heroes.component'; import { HeroSearchComponent } from './hero-search/hero-search.component'; import { MessagesComponent } from './messages/messages.component'; @NgModule({ imports: [ BrowserModule, FormsModule, AppRoutingModule, HttpClientModule, // The HttpClientInMemoryWebApiModule module intercepts HTTP requests // and returns simulated server responses. // Remove it when a real server is ready to receive requests. HttpClientInMemoryWebApiModule.forRoot( InMemoryDataService, { dataEncapsulation: false } ) ], declarations: [ AppComponent, DashboardComponent, HeroesComponent, HeroDetailComponent, MessagesComponent, HeroSearchComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ### `HeroesComponent` ``` <h2>My Heroes</h2> <div> <label for="new-hero">Hero name: </label> <input id="new-hero" #heroName /> <!-- (click) passes input value to add() and then clears the input --> <button type="button" class="add-button" (click)="add(heroName.value); heroName.value=''"> Add hero </button> </div> <ul class="heroes"> <li *ngFor="let hero of heroes"> <a routerLink="/detail/{{hero.id}}"> <span class="badge">{{hero.id}}</span> {{hero.name}} </a> <button type="button" class="delete" title="delete hero" (click)="delete(hero)">x</button> </li> </ul> ``` ``` import { Component, OnInit } from '@angular/core'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-heroes', templateUrl: './heroes.component.html', styleUrls: ['./heroes.component.css'] }) export class HeroesComponent implements OnInit { heroes: Hero[] = []; constructor(private heroService: HeroService) { } ngOnInit(): void { this.getHeroes(); } getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes); } add(name: string): void { name = name.trim(); if (!name) { return; } this.heroService.addHero({ name } as Hero) .subscribe(hero => { this.heroes.push(hero); }); } delete(hero: Hero): void { this.heroes = this.heroes.filter(h => h !== hero); this.heroService.deleteHero(hero.id).subscribe(); } } ``` ``` /* HeroesComponent's private CSS styles */ .heroes { margin: 0 0 2em 0; list-style-type: none; padding: 0; width: 15em; } input { display: block; width: 100%; padding: .5rem; margin: 1rem 0; box-sizing: border-box; } .heroes li { position: relative; cursor: pointer; } .heroes li:hover { left: .1em; } .heroes a { color: #333; text-decoration: none; background-color: #EEE; margin: .5em; padding: .3em 0; height: 1.6em; border-radius: 4px; display: block; width: 100%; } .heroes a:hover { color: #2c3a41; background-color: #e6e6e6; } .heroes a:active { background-color: #525252; color: #fafafa; } .heroes .badge { display: inline-block; font-size: small; color: white; padding: 0.8em 0.7em 0 0.7em; background-color: #405061; line-height: 1em; position: relative; 
left: -1px; top: -4px; height: 1.8em; min-width: 16px; text-align: right; margin-right: .8em; border-radius: 4px 0 0 4px; } .add-button { padding: .5rem 1.5rem; font-size: 1rem; margin-bottom: 2rem; } .add-button:hover { color: white; background-color: #42545C; } button.delete { position: absolute; left: 210px; top: 5px; background-color: white; color: #525252; font-size: 1.1rem; margin: 0; padding: 1px 10px 3px 10px; } button.delete:hover { background-color: #525252; color: white; } ``` ### `HeroDetailComponent` ``` <div *ngIf="hero"> <h2>{{hero.name | uppercase}} Details</h2> <div><span>id: </span>{{hero.id}}</div> <div> <label for="hero-name">Hero name: </label> <input id="hero-name" [(ngModel)]="hero.name" placeholder="Hero name"/> </div> <button type="button" (click)="goBack()">go back</button> <button type="button" (click)="save()">save</button> </div> ``` ``` import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { Location } from '@angular/common'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-hero-detail', templateUrl: './hero-detail.component.html', styleUrls: [ './hero-detail.component.css' ] }) export class HeroDetailComponent implements OnInit { hero: Hero | undefined; constructor( private route: ActivatedRoute, private heroService: HeroService, private location: Location ) {} ngOnInit(): void { this.getHero(); } getHero(): void { const id = parseInt(this.route.snapshot.paramMap.get('id')!, 10); this.heroService.getHero(id) .subscribe(hero => this.hero = hero); } goBack(): void { this.location.back(); } save(): void { if (this.hero) { this.heroService.updateHero(this.hero) .subscribe(() => this.goBack()); } } } ``` ``` /* HeroDetailComponent's private CSS styles */ label { color: #435960; font-weight: bold; } input { font-size: 1em; padding: .5rem; } button { margin-top: 20px; margin-right: .5rem; background-color: #eee; padding: 1rem; border-radius: 4px; font-size: 1rem; } button:hover { background-color: #cfd8dc; } button:disabled { background-color: #eee; color: #ccc; cursor: auto; } ``` ### `DashboardComponent` ``` <h2>Top Heroes</h2> <div class="heroes-menu"> <a *ngFor="let hero of heroes" routerLink="/detail/{{hero.id}}"> {{hero.name}} </a> </div> <app-hero-search></app-hero-search> ``` ``` import { Component, OnInit } from '@angular/core'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-dashboard', templateUrl: './dashboard.component.html', styleUrls: [ './dashboard.component.css' ] }) export class DashboardComponent implements OnInit { heroes: Hero[] = []; constructor(private heroService: HeroService) { } ngOnInit(): void { this.getHeroes(); } getHeroes(): void { this.heroService.getHeroes() .subscribe(heroes => this.heroes = heroes.slice(1, 5)); } } ``` ``` /* DashboardComponent's private CSS styles */ h2 { text-align: center; } .heroes-menu { padding: 0; margin: auto; max-width: 1000px; /* flexbox */ display: -webkit-box; display: -moz-box; display: -ms-flexbox; display: -webkit-flex; display: flex; flex-direction: row; flex-wrap: wrap; justify-content: space-around; align-content: flex-start; align-items: flex-start; } a { background-color: #3f525c; border-radius: 2px; padding: 1rem; font-size: 1.2rem; text-decoration: none; display: inline-block; color: #fff; text-align: center; width: 100%; min-width: 70px; margin: .5rem auto; box-sizing: border-box; /* flexbox */ order: 0; flex: 0 1 auto; 
align-self: auto; } @media (min-width: 600px) { a { width: 18%; box-sizing: content-box; } } a:hover { background-color: black; } ``` ### `HeroSearchComponent` ``` <div id="search-component"> <label for="search-box">Hero Search</label> <input #searchBox id="search-box" (input)="search(searchBox.value)" /> <ul class="search-result"> <li *ngFor="let hero of heroes$ | async" > <a routerLink="/detail/{{hero.id}}"> {{hero.name}} </a> </li> </ul> </div> ``` ``` import { Component, OnInit } from '@angular/core'; import { Observable, Subject } from 'rxjs'; import { debounceTime, distinctUntilChanged, switchMap } from 'rxjs/operators'; import { Hero } from '../hero'; import { HeroService } from '../hero.service'; @Component({ selector: 'app-hero-search', templateUrl: './hero-search.component.html', styleUrls: [ './hero-search.component.css' ] }) export class HeroSearchComponent implements OnInit { heroes$!: Observable<Hero[]>; private searchTerms = new Subject<string>(); constructor(private heroService: HeroService) {} // Push a search term into the observable stream. search(term: string): void { this.searchTerms.next(term); } ngOnInit(): void { this.heroes$ = this.searchTerms.pipe( // wait 300ms after each keystroke before considering the term debounceTime(300), // ignore new term if same as previous term distinctUntilChanged(), // switch to new search observable each time the term changes switchMap((term: string) => this.heroService.searchHeroes(term)), ); } } ``` ``` /* HeroSearch private styles */ label { display: block; font-weight: bold; font-size: 1.2rem; margin-top: 1rem; margin-bottom: .5rem; } input { padding: .5rem; width: 100%; max-width: 600px; box-sizing: border-box; display: block; } input:focus { outline: #336699 auto 1px; } li { list-style-type: none; } .search-result li a { border-bottom: 1px solid gray; border-left: 1px solid gray; border-right: 1px solid gray; display: inline-block; width: 100%; max-width: 600px; padding: .5rem; box-sizing: border-box; text-decoration: none; color: black; } .search-result li a:hover { background-color: #435A60; color: white; } ul.search-result { margin-top: 0; padding-left: 0; } ``` Summary ------- You're at the end of your journey, and you've accomplished a lot. * You added the necessary dependencies to use HTTP in the application * You refactored `HeroService` to load heroes from a web API * You extended `HeroService` to support `post()`, `put()`, and `delete()` methods * You updated the components to allow adding, editing, and deleting of heroes * You configured an in-memory web API * You learned how to use observables This concludes the "Tour of Heroes" tutorial. You're ready to learn more about Angular development in the fundamentals section, starting with the [Architecture](../../guide/architecture "Architecture") guide. Last reviewed on Mon Feb 28 2022
angular Deploying an application Deploying an application ======================== Deploying your application is the process of compiling, or building, your code and hosting the JavaScript, CSS, and HTML on a web server. This section builds on the previous steps in the [Getting Started](https://angular.io/start/start "Try it: A basic application") tutorial and shows you how to deploy your application. Prerequisites ------------- A best practice is to run your project locally before you deploy it. To run your project locally, you need the following installed on your computer: * [Node.js](https://nodejs.org/en). * The [Angular CLI](https://cli.angular.io). From the terminal, install the Angular CLI globally with: ``` npm install -g @angular/cli ``` With the Angular CLI, you can use the command `ng` to create new workspaces, new projects, serve your application during development, or produce builds to share or distribute. Running your application locally -------------------------------- 1. Download the source code from your StackBlitz project by clicking the `Download Project` icon in the left menu, across from `Project`, to download your project as a zip archive. 2. Unzip the archive and change directory to the newly created project. For example: ``` cd angular-ynqttp ``` 3. To download and install npm packages, use the following npm CLI command: ``` npm install ``` 4. Use the following CLI command to run your application locally: ``` ng serve ``` 5. To see your application in the browser, go to http://localhost:4200/. If the default port 4200 is not available, you can specify another port with the port flag as in the following example: ``` ng serve --port 4201 ``` While serving your application, you can edit your code and see the changes update automatically in the browser. To stop the `ng serve` command, press `Ctrl`+`c`. Building and hosting your application ------------------------------------- 1. To build your application for production, use the `build` command. By default, this command uses the `production` build configuration. ``` ng build ``` This command creates a `dist` folder in the application root directory with all the files that a hosting service needs for serving your application. > If the above `ng build` command throws an error about missing packages, append the missing dependencies in your local project's `package.json` file to match the one in the downloaded StackBlitz project. > > 2. Copy the contents of the `dist/my-project-name` folder to your web server. Because these files are static, you can host them on any web server capable of serving files; such as `Node.js`, Java, .NET, or any backend such as [Firebase](https://firebase.google.com/docs/hosting), [Google Cloud](https://cloud.google.com/solutions/web-hosting), or [App Engine](https://cloud.google.com/appengine/docs/standard/python/getting-started/hosting-a-static-website). For more information, see [Building & Serving](../guide/build "Building and Serving Angular Apps") and [Deployment](../guide/deployment "Deployment guide"). What's next ----------- In this tutorial, you've laid the foundation to explore the Angular world in areas such as mobile development, UX/UI development, and server-side rendering. You can go deeper by studying more of Angular's features, engaging with the vibrant community, and exploring the robust ecosystem. 
### Learning more Angular For a more in-depth tutorial that leads you through building an application locally and exploring many of Angular's most popular features, see [Tour of Heroes](https://angular.io/start/tutorial). To explore Angular's foundational concepts, see the guides in the Understanding Angular section such as [Angular Components Overview](../guide/component-overview) or [Template syntax](../guide/template-syntax). ### Joining the community [Tweet that you've completed this tutorial](https://twitter.com/intent/tweet?url=https://angular.io/start&text=I%20just%20finished%20the%20Angular%20Getting%20Started%20Tutorial "Angular on Twitter"), tell us what you think, or submit [suggestions for future editions](https://github.com/angular/angular/issues/new/choose "Angular GitHub repository new issue form"). Keep current by following the [Angular blog](https://blog.angular.io/ "Angular blog"). ### Exploring the Angular ecosystem To support your UX/UI development, see [Angular Material](https://material.angular.io/ "Angular Material web site"). The Angular community also has an extensive [network of third-party tools and libraries](https://angular.io/start/resources "Angular resources list"). Last reviewed on Wed Sep 15 2021 angular Managing data Managing data ============= This guide builds on the second step of the [Getting started with a basic Angular application](https://angular.io/start/start) tutorial, [Adding navigation](https://angular.io/start/start/start-routing "Adding navigation"). At this stage of development, the store application has a product catalog with two views: a product list and product details. Users can click on a product name from the list to see details in a new view, with a distinct URL, or route. This step of the tutorial guides you through creating a shopping cart in the following phases: * Update the product details view to include a **Buy** button, which adds the current product to a list of products that a cart service manages * Add a cart component, which displays the items in the cart * Add a shipping component, which retrieves shipping prices for the items in the cart by using Angular's `[HttpClient](../api/common/http/httpclient)` to retrieve shipping data from a `.json` file Create the shopping cart service -------------------------------- In Angular, a service is an instance of a class that you can make available to any part of your application using Angular's [dependency injection system](../guide/glossary#dependency-injection-di "Dependency injection definition"). Currently, users can view product information, and the application can simulate sharing and notifications about product changes. The next step is to build a way for users to add products to a cart. This section walks you through adding a **Buy** button and setting up a cart service to store information about products in the cart. ### Define a cart service This section walks you through creating the `CartService` that tracks products added to shopping cart. 1. In the terminal generate a new `cart` service by running the following command: ``` ng generate service cart ``` 2. Import the `Product` interface from `./products.ts` into the `cart.service.ts` file, and in the `CartService` class, define an `items` property to store the array of the current products in the cart. ``` import { Product } from './products'; import { Injectable } from '@angular/core'; /* . . . */ @Injectable({ providedIn: 'root' }) export class CartService { items: Product[] = []; /* . . . */ } ``` 3. 
Define methods to add items to the cart, return cart items, and clear the cart items. ``` @Injectable({ providedIn: 'root' }) export class CartService { items: Product[] = []; /* . . . */ addToCart(product: Product) { this.items.push(product); } getItems() { return this.items; } clearCart() { this.items = []; return this.items; } /* . . . */ } ``` * The `addToCart()` method appends a product to an array of `items` * The `getItems()` method collects the items users add to the cart and returns each item with its associated quantity * The `clearCart()` method returns an empty array of items, which empties the cart ### Use the cart service This section walks you through using the `CartService` to add a product to the cart. 1. In `product-details.component.ts`, import the cart service. ``` import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { Product, products } from '../products'; import { CartService } from '../cart.service'; ``` 2. Inject the cart service by adding it to the `constructor()`. ``` export class ProductDetailsComponent implements OnInit { constructor( private route: ActivatedRoute, private cartService: CartService ) { } } ``` 3. Define the `addToCart()` method, which adds the current product to the cart. ``` export class ProductDetailsComponent implements OnInit { addToCart(product: Product) { this.cartService.addToCart(product); window.alert('Your product has been added to the cart!'); } } ``` The `addToCart()` method does the following: * Takes the current `product` as an argument * Uses the `CartService` `addToCart()` method to add the product to the cart * Displays a message that you've added a product to the cart 4. In `product-details.component.html`, add a button with the label **Buy**, and bind the `click()` event to the `addToCart()` method. This code updates the product details template with a **Buy** button that adds the current product to the cart. ``` <h2>Product Details</h2> <div *ngIf="product"> <h3>{{ product.name }}</h3> <h4>{{ product.price | currency }}</h4> <p>{{ product.description }}</p> <button type="button" (click)="addToCart(product)">Buy</button> </div> ``` 5. Verify that the new **Buy** button appears as expected by refreshing the application and clicking on a product's name to display its details. 6. Click the **Buy** button to add the product to the stored list of items in the cart and display a confirmation message. Create the cart view -------------------- For customers to see their cart, you can create the cart view in two steps: 1. Create a cart component and configure routing to the new component. 2. Display the cart items. ### Set up the cart component To create the cart view, follow the same steps you did to create the `ProductDetailsComponent` and configure routing for the new component. 1. Generate a new component named `cart` in the terminal by running the following command: ``` ng generate component cart ``` This command will generate the `cart.component.ts` file and its associated template and styles files. ``` import { Component } from '@angular/core'; @Component({ selector: 'app-cart', templateUrl: './cart.component.html', styleUrls: ['./cart.component.css'] }) export class CartComponent { } ``` 2. Notice that the newly created `CartComponent` is added to the module's `declarations` in `app.module.ts`. 
``` import { CartComponent } from './cart/cart.component'; @NgModule({ declarations: [ AppComponent, TopBarComponent, ProductListComponent, ProductAlertsComponent, ProductDetailsComponent, CartComponent, ], ``` 3. Still in `app.module.ts`, add a route for the component `CartComponent`, with a `path` of `cart`. ``` @NgModule({ imports: [ BrowserModule, ReactiveFormsModule, RouterModule.forRoot([ { path: '', component: ProductListComponent }, { path: 'products/:productId', component: ProductDetailsComponent }, { path: 'cart', component: CartComponent }, ]) ], ``` 4. Update the **Checkout** button so that it routes to the `/cart` URL. In `top-bar.component.html`, add a `[routerLink](../api/router/routerlink)` directive pointing to `/cart`. ``` <a routerLink="/cart" class="button fancy-button"> <i class="material-icons">shopping_cart</i>Checkout </a> ``` 5. Verify the new `CartComponent` works as expected by clicking the **Checkout** button. You can see the "cart works!" default text, and the URL has the pattern `https://getting-started.stackblitz.io/cart`, where `getting-started.stackblitz.io` may be different for your StackBlitz project. ### Display the cart items This section shows you how to use the cart service to display the products in the cart. 1. In `cart.component.ts`, import the `CartService` from the `cart.service.ts` file. ``` import { Component } from '@angular/core'; import { CartService } from '../cart.service'; ``` 2. Inject the `CartService` so that the `CartComponent` can use it by adding it to the `constructor()`. ``` export class CartComponent { constructor( private cartService: CartService ) { } } ``` 3. Define the `items` property to store the products in the cart. ``` export class CartComponent { items = this.cartService.getItems(); constructor( private cartService: CartService ) { } } ``` This code sets the items using the `CartService` `getItems()` method. You defined this method [when you created `cart.service.ts`](https://angular.io/start/start/start-data#generate-cart-service). 4. Update the cart template with a header, and use a `<div>` with an `*[ngFor](../api/common/ngfor)` to display each of the cart items with its name and price. The resulting `CartComponent` template is as follows. ``` <h3>Cart</h3> <div class="cart-item" *ngFor="let item of items"> <span>{{ item.name }}</span> <span>{{ item.price | currency }}</span> </div> ``` 5. Verify that your cart works as expected: 1. Click **My Store**. 2. Click on a product name to display its details. 3. Click **Buy** to add the product to the cart. 4. Click **Checkout** to see the cart. For more information about services, see [Introduction to Services and Dependency Injection](../guide/architecture-services "Concepts > Intro to Services and DI"). Retrieve shipping prices ------------------------ Servers often return data in the form of a stream. Streams are useful because they make it easy to transform the returned data and make modifications to the way you request that data. Angular `[HttpClient](../api/common/http/httpclient)` is a built-in way to fetch data from external APIs and provide them to your application as a stream. This section shows you how to use `[HttpClient](../api/common/http/httpclient)` to retrieve shipping prices from an external file. The application that StackBlitz generates for this guide comes with predefined shipping data in `assets/shipping.json`. Use this data to add shipping prices for items in the cart. 
``` [ { "type": "Overnight", "price": 25.99 }, { "type": "2-Day", "price": 9.99 }, { "type": "Postal", "price": 2.99 } ] ``` ### Configure `AppModule` to use `[HttpClient](../api/common/http/httpclient)` To use Angular's `[HttpClient](../api/common/http/httpclient)`, you must configure your application to use `[HttpClientModule](../api/common/http/httpclientmodule)`. Angular's `[HttpClientModule](../api/common/http/httpclientmodule)` registers the providers your application needs to use the `[HttpClient](../api/common/http/httpclient)` service throughout your application. 1. In `app.module.ts`, import `[HttpClientModule](../api/common/http/httpclientmodule)` from the `@angular/common/[http](../api/common/http)` package at the top of the file with the other imports. As there are a number of other imports, this code snippet omits them for brevity. Be sure to leave the existing imports in place. ``` import { HttpClientModule } from '@angular/common/http'; ``` 2. To register Angular's `[HttpClient](../api/common/http/httpclient)` providers globally, add `[HttpClientModule](../api/common/http/httpclientmodule)` to the `AppModule` `@[NgModule](../api/core/ngmodule)()` `imports` array. ``` @NgModule({ imports: [ BrowserModule, HttpClientModule, ReactiveFormsModule, RouterModule.forRoot([ { path: '', component: ProductListComponent }, { path: 'products/:productId', component: ProductDetailsComponent }, { path: 'cart', component: CartComponent }, ]) ], declarations: [ AppComponent, TopBarComponent, ProductListComponent, ProductAlertsComponent, ProductDetailsComponent, CartComponent, ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` ### Configure `CartService` to use `[HttpClient](../api/common/http/httpclient)` The next step is to inject the `[HttpClient](../api/common/http/httpclient)` service into your service so your application can fetch data and interact with external APIs and resources. 1. In `cart.service.ts`, import `[HttpClient](../api/common/http/httpclient)` from the `@angular/common/[http](../api/common/http)` package. ``` import { HttpClient } from '@angular/common/http'; import { Product } from './products'; import { Injectable } from '@angular/core'; ``` 2. Inject `[HttpClient](../api/common/http/httpclient)` into the `CartService` `constructor()`. ``` @Injectable({ providedIn: 'root' }) export class CartService { items: Product[] = []; constructor( private http: HttpClient ) {} /* . . . */ } ``` ### Configure `CartService` to get shipping prices To get shipping data from `shipping.json`, you can use the `[HttpClient](../api/common/http/httpclient)` `get()` method. 1. In `cart.service.ts`, below the `clearCart()` method, define a new `getShippingPrices()` method that uses the `[HttpClient](../api/common/http/httpclient)` `get()` method. ``` @Injectable({ providedIn: 'root' }) export class CartService { /* . . . */ getShippingPrices() { return this.http.get<{type: string, price: number}[]>('/assets/shipping.json'); } } ``` For more information about Angular's `[HttpClient](../api/common/http/httpclient)`, see the [Client-Server Interaction](../guide/http "Server interaction through HTTP") guide.
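The inline `{type: string, price: number}` type keeps the example short, but as the application grows you might prefer a named interface for the shipping data. The following is a minimal sketch of that variation — the `ShippingPrice` name and the `shipping-price.ts` file are assumptions for illustration, not part of this guide's files:

```
// shipping-price.ts (hypothetical file)
export interface ShippingPrice {
  type: string;
  price: number;
}

// In cart.service.ts, the method could then return a typed observable:
getShippingPrices() {
  return this.http.get<ShippingPrice[]>('/assets/shipping.json');
}
```

With an interface like this, any component that consumes `getShippingPrices()` shares the same type instead of repeating the object literal.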
Create a shipping component --------------------------- Now that you've configured your application to retrieve shipping data, you can create a place to render that data. 1. Generate a new component named `shipping` in the terminal by running the following command: ``` ng generate component shipping ``` This command will generate the `shipping.component.ts` file and its associated template and styles files. ``` import { Component } from '@angular/core'; @Component({ selector: 'app-shipping', templateUrl: './shipping.component.html', styleUrls: ['./shipping.component.css'] }) export class ShippingComponent { } ``` 2. In `app.module.ts`, add a route for shipping. Specify a `path` of `shipping` and a component of `ShippingComponent`. ``` @NgModule({ imports: [ BrowserModule, HttpClientModule, ReactiveFormsModule, RouterModule.forRoot([ { path: '', component: ProductListComponent }, { path: 'products/:productId', component: ProductDetailsComponent }, { path: 'cart', component: CartComponent }, { path: 'shipping', component: ShippingComponent }, ]) ], declarations: [ AppComponent, TopBarComponent, ProductListComponent, ProductAlertsComponent, ProductDetailsComponent, CartComponent, ShippingComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` There's no link to the new shipping component yet, but you can see its template in the preview pane by entering the URL its route specifies. The URL has the pattern: `https://angular-ynqttp--4200.local.webcontainer.io/shipping` where the `angular-ynqttp--4200.local.webcontainer.io` part may be different for your StackBlitz project. ### Configuring the `ShippingComponent` to use `CartService` This section guides you through modifying the `ShippingComponent` to retrieve shipping data via HTTP from the `shipping.json` file. 1. In `shipping.component.ts`, import `CartService`. ``` import { Component, OnInit } from '@angular/core'; import { Observable } from 'rxjs'; import { CartService } from '../cart.service'; ``` 2. Inject the cart service in the `ShippingComponent` `constructor()`. ``` constructor(private cartService: CartService) { } ``` 3. Define a `shippingCosts` property and initialize it inside the `ngOnInit()` method using the `getShippingPrices()` method from the `CartService`. ``` export class ShippingComponent implements OnInit { shippingCosts!: Observable<{ type: string, price: number }[]>; ngOnInit(): void { this.shippingCosts = this.cartService.getShippingPrices(); } } ``` 4. Update the `ShippingComponent` template to display the shipping types and prices using the `[async](../api/common/asyncpipe)` pipe. ``` <h3>Shipping Prices</h3> <div class="shipping-item" *ngFor="let shipping of shippingCosts | async"> <span>{{ shipping.type }}</span> <span>{{ shipping.price | currency }}</span> </div> ``` The `[async](../api/common/asyncpipe)` pipe returns the latest value from a stream of data and continues to do so for the life of a given component. When Angular destroys that component, the `[async](../api/common/asyncpipe)` pipe automatically stops. For detailed information about the `[async](../api/common/asyncpipe)` pipe, see the [AsyncPipe API documentation](../api/common/asyncpipe). 5. Add a link from the `CartComponent` view to the `ShippingComponent` view. ``` <h3>Cart</h3> <p> <a routerLink="/shipping">Shipping Prices</a> </p> <div class="cart-item" *ngFor="let item of items"> <span>{{ item.name }}</span> <span>{{ item.price | currency }}</span> </div> ``` 6. Click the **Checkout** button to see the updated cart. Remember that changing the application causes the preview to refresh, which empties the cart. Click on the link to navigate to the shipping prices.
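As a point of comparison, here is a rough sketch of what the `async` pipe saves you from writing in `ShippingComponent` — a manual subscription that you would also have to tear down yourself. This is only an illustration of the trade-off, not code this guide asks you to add, and the `subscription` field name is an assumption:

```
import { Component, OnDestroy, OnInit } from '@angular/core';
import { Subscription } from 'rxjs';

import { CartService } from '../cart.service';

@Component({
  selector: 'app-shipping',
  templateUrl: './shipping.component.html',
  styleUrls: ['./shipping.component.css']
})
export class ShippingComponent implements OnInit, OnDestroy {
  // A plain array instead of an Observable, so the template would drop `| async`.
  shippingCosts: { type: string, price: number }[] = [];
  private subscription!: Subscription;

  constructor(private cartService: CartService) { }

  ngOnInit(): void {
    this.subscription = this.cartService.getShippingPrices()
      .subscribe(prices => this.shippingCosts = prices);
  }

  ngOnDestroy(): void {
    // Without the async pipe, stopping the subscription is your responsibility.
    this.subscription.unsubscribe();
  }
}
```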
What's next ----------- You now have a store application with a product catalog and a shopping cart, and you can look up shipping prices. To continue exploring Angular: * Continue to [Forms for User Input](https://angular.io/start/start/start-forms "Forms for User Input") to finish the application by adding the shopping cart view and a checkout form * Skip ahead to [Deployment](https://angular.io/start/start/start-deployment "Deployment") to move to local development, or deploy your application to Firebase or your own server Last reviewed on Mon Feb 28 2022
angular Using forms for user input Using forms for user input ========================== This guide builds on the [Managing Data](https://angular.io/start/start/start-data "Try it: Managing Data") step of the Getting Started tutorial, [Get started with a basic Angular app](https://angular.io/start/start "Get started with a basic Angular app"). This section walks you through adding a form-based checkout feature to collect user information as part of checkout. Define the checkout form model ------------------------------ This step shows you how to set up the checkout form model in the component class. The form model determines the status of the form. 1. Open `cart.component.ts`. 2. Import the `[FormBuilder](../api/forms/formbuilder)` service from the `@angular/forms` package. This service provides convenient methods for generating controls. ``` import { Component } from '@angular/core'; import { FormBuilder } from '@angular/forms'; import { CartService } from '../cart.service'; ``` 3. Inject the `[FormBuilder](../api/forms/formbuilder)` service in the `CartComponent` `constructor()`. This service is part of the `[ReactiveFormsModule](../api/forms/reactiveformsmodule)` module, which you've already imported. ``` export class CartComponent { constructor( private cartService: CartService, private formBuilder: FormBuilder, ) {} } ``` 4. To gather the user's name and address, use the `[FormBuilder](../api/forms/formbuilder)` `group()` method to set the `checkoutForm` property to a form model containing `name` and `address` fields. ``` export class CartComponent { items = this.cartService.getItems(); checkoutForm = this.formBuilder.group({ name: '', address: '' }); constructor( private cartService: CartService, private formBuilder: FormBuilder, ) {} } ``` 5. Define an `onSubmit()` method to process the form. This method allows users to submit their name and address. In addition, this method uses the `clearCart()` method of the `CartService` to reset the form and clear the cart. The entire cart component class is as follows: ``` import { Component } from '@angular/core'; import { FormBuilder } from '@angular/forms'; import { CartService } from '../cart.service'; @Component({ selector: 'app-cart', templateUrl: './cart.component.html', styleUrls: ['./cart.component.css'] }) export class CartComponent { items = this.cartService.getItems(); checkoutForm = this.formBuilder.group({ name: '', address: '' }); constructor( private cartService: CartService, private formBuilder: FormBuilder, ) {} onSubmit(): void { // Process checkout data here this.items = this.cartService.clearCart(); console.warn('Your order has been submitted', this.checkoutForm.value); this.checkoutForm.reset(); } } ``` Create the checkout form ------------------------ Use the following steps to add a checkout form at the bottom of the Cart view. 1. At the bottom of `cart.component.html`, add an HTML `<form>` element and a **Purchase** button. 2. Use a `formGroup` property binding to bind `checkoutForm` to the HTML `<form>`. ``` <form [formGroup]="checkoutForm"> <button class="button" type="submit">Purchase</button> </form> ``` 3. On the `form` tag, use an `ngSubmit` event binding to listen for the form submission and call the `onSubmit()` method with the `checkoutForm` value. ``` <form [formGroup]="checkoutForm" (ngSubmit)="onSubmit()"> </form> ``` 4. 
Add `<input>` fields for `name` and `address`, each with a `[formControlName](../api/forms/formcontrolname)` attribute that binds to the `checkoutForm` form controls for `name` and `address` to their `<input>` fields. The complete component is as follows: ``` <h3>Cart</h3> <p> <a routerLink="/shipping">Shipping Prices</a> </p> <div class="cart-item" *ngFor="let item of items"> <span>{{ item.name }} </span> <span>{{ item.price | currency }}</span> </div> <form [formGroup]="checkoutForm" (ngSubmit)="onSubmit()"> <div> <label for="name"> Name </label> <input id="name" type="text" formControlName="name"> </div> <div> <label for="address"> Address </label> <input id="address" type="text" formControlName="address"> </div> <button class="button" type="submit">Purchase</button> </form> ``` After putting a few items in the cart, users can review their items, enter their name and address, and submit their purchase. To confirm submission, open the console to see an object containing the name and address you submitted. What's next ----------- You have a complete online store application with a product catalog, a shopping cart, and a checkout function. [Continue to the "Deployment" section](https://angular.io/start/start/start-deployment "Try it: Deployment") to move to local development, or deploy your app to Firebase or your own server. Last reviewed on Wed Sep 15 2021 angular Adding navigation Adding navigation ================= This guide builds on the first step of the Getting Started tutorial, [Get started with a basic Angular app](https://angular.io/start/start "Get started with a basic Angular app"). At this stage of development, the online store application has a basic product catalog. In the following sections, you'll add the following features to the application: * Type a URL in the address bar to navigate to a corresponding product page * Click links on the page to navigate within your single-page application * Click the browser's back and forward buttons to navigate the browser history intuitively Associate a URL path with a component ------------------------------------- The application already uses the Angular `[Router](../api/router/router)` to navigate to the `ProductListComponent`. This section shows you how to define a route to show individual product details. 1. Generate a new component for product details. In the terminal generate a new `product-details` component by running the following command: ``` ng generate component product-details ``` 2. In `app.module.ts`, add a route for product details, with a `path` of `products/:productId` and `ProductDetailsComponent` for the `component`. ``` @NgModule({ imports: [ BrowserModule, ReactiveFormsModule, RouterModule.forRoot([ { path: '', component: ProductListComponent }, { path: 'products/:productId', component: ProductDetailsComponent }, ]) ], declarations: [ AppComponent, TopBarComponent, ProductListComponent, ProductAlertsComponent, ProductDetailsComponent, ], ``` 3. Open `product-list.component.html`. 4. Modify the product name anchor to include a `[routerLink](../api/router/routerlink)` with the `product.id` as a parameter. ``` <div *ngFor="let product of products"> <h3> <a [title]="product.name + ' details'" [routerLink]="['/products', product.id]"> {{ product.name }} </a> </h3> <!-- . . . --> </div> ``` The `[RouterLink](../api/router/routerlink)` directive helps you customize the anchor element. In this case, the route, or URL, contains one fixed segment, `/products`. 
The final segment is variable, inserting the `id` property of the current product. For example, the URL for a product with an `id` of 1 would be similar to `https://getting-started-myfork.stackblitz.io/products/1`. 5. Verify that the router works as intended by clicking the product name. The application should display the `ProductDetailsComponent`, which currently says "product-details works!" Notice that the URL in the preview window changes. The final segment is `products/#`, where `#` is the `id` of the product you clicked. View product details -------------------- The `ProductDetailsComponent` handles the display of each product. The Angular Router displays components based on the browser's URL and [your defined routes](https://angular.io/start/start/start-routing#define-routes). In this section, you'll use the Angular Router to combine the `products` data and route information to display the specific details for each product. 1. In `product-details.component.ts`, import `[ActivatedRoute](../api/router/activatedroute)` from `@angular/router`, and the `products` array from `../products`. ``` import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; import { Product, products } from '../products'; ``` 2. Define the `product` property. ``` export class ProductDetailsComponent implements OnInit { product: Product | undefined; /* ... */ } ``` 3. Inject `[ActivatedRoute](../api/router/activatedroute)` into the `constructor()` by adding `private route: [ActivatedRoute](../api/router/activatedroute)` as an argument within the constructor's parentheses. ``` export class ProductDetailsComponent implements OnInit { product: Product | undefined; constructor(private route: ActivatedRoute) { } } ``` `[ActivatedRoute](../api/router/activatedroute)` is specific to each component that the Angular Router loads. `[ActivatedRoute](../api/router/activatedroute)` contains information about the route and the route's parameters. By injecting `[ActivatedRoute](../api/router/activatedroute)`, you are configuring the component to use a service. The [Managing Data](https://angular.io/start/start/start-data "Try it: Managing Data") step covers services in more detail. 4. In the `ngOnInit()` method, extract the `productId` from the route parameters and find the corresponding product in the `products` array. ``` ngOnInit() { // First get the product id from the current route. const routeParams = this.route.snapshot.paramMap; const productIdFromRoute = Number(routeParams.get('productId')); // Find the product that corresponds with the id provided in the route. this.product = products.find(product => product.id === productIdFromRoute); } ``` The route parameters correspond to the path variables you define in the route. To access the route parameters, we use `route.snapshot`, which is the `[ActivatedRouteSnapshot](../api/router/activatedroutesnapshot)` that contains information about the active route at that particular moment in time. The URL that matches the route provides the `productId`. Angular uses the `productId` to display the details for each unique product. 5. Update the `ProductDetailsComponent` template to display product details with an `*[ngIf](../api/common/ngif)`. If a product exists, the `<div>` renders with a name, price, and description. 
``` <h2>Product Details</h2> <div *ngIf="product"> <h3>{{ product.name }}</h3> <h4>{{ product.price | currency }}</h4> <p>{{ product.description }}</p> </div> ``` The line, `<h4>{{ product.price | [currency](../api/common/currencypipe) }}</h4>`, uses the `[currency](../api/common/currencypipe)` pipe to transform `product.price` from a number to a currency string. A pipe is a way you can transform data in your HTML template. For more information about Angular pipes, see [Pipes](../guide/pipes "Pipes"). When users click on a name in the product list, the router navigates them to the distinct URL for the product, shows the `ProductDetailsComponent`, and displays the product details. For more information about the Angular Router, see [Routing & Navigation](../guide/router "Routing & Navigation guide"). What's next ----------- You have configured your application so you can view product details, each with a distinct URL. To continue exploring Angular: * Continue to [Managing Data](https://angular.io/start/start/start-data "Try it: Managing Data") to add a shopping cart feature, manage cart data, and retrieve external data for shipping prices * Skip ahead to [Deployment](https://angular.io/start/start/start-deployment "Try it: Deployment") to deploy your application to Firebase or move to local development Last reviewed on Mon Feb 28 2022 angular ng build Compiles an Angular application or library into an output directory named dist/ at the given output path. ### `ng build [project]` ### `ng b [project]` #### Description The command can be used to build a project of type "application" or "library". When used to build a library, a different builder is invoked, and only the `ts-config`, `configuration`, and `watch` options are applied. All other options apply only to building applications. The application builder uses the [webpack](https://webpack.js.org/) build tool, with default configuration options specified in the workspace configuration file (`angular.json`) or with a named alternative configuration. A "development" configuration is created by default when you use the CLI to create the project, and you can use that configuration by specifying the `--configuration development`. The configuration options generally correspond to the command options. You can override individual configuration defaults by specifying the corresponding options on the command line. The command can accept option names given in either dash-case or camelCase. Note that in the configuration file, you must specify names in camelCase. Some additional options can only be set through the configuration file, either by direct editing or with the `ng config` command. These include `assets`, `styles`, and `scripts` objects that provide runtime-global resources to include in the project. Resources in CSS, such as images and fonts, are automatically written and fingerprinted at the root of the output folder. For further details, see [Workspace Configuration](../guide/workspace-config). Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `project` | The name of the project to build. Can be an application or a library. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--allowed-common-js-dependencies` | A list of CommonJS packages that are allowed to be used without a build time warning. | `array` | | | `--aot` | Build using Ahead of Time compilation. | `boolean` | `true` | | `--base-href` | Base url for the application being built. 
| `string` | | | `--build-optimizer` | Enables advanced build optimizations when using the 'aot' option. | `boolean` | `true` | | `--common-chunk` | Generate a separate bundle containing code used across multiple bundles. | `boolean` | `true` | | `--configuration` | One or more named builder configurations as a comma-separated list as specified in the "configurations" section in angular.json. The builder uses the named configurations to run the given target. For more information, see [https://angular.io/guide/workspace-config#alternate-build-configurations](../guide/workspace-config#alternate-build-configurations). Aliases: -c | `string` | | | `--cross-origin` | Define the crossorigin attribute setting of elements that provide CORS support. | `none | anonymous | use-credentials` | `none` | | `--delete-output-path` | Delete the output path before building. | `boolean` | `true` | | `--deploy-url` | **Deprecated:** Use "baseHref" option, "APP\_BASE\_HREF" DI token or a combination of both instead. For more information, see [https://angular.io/guide/deployment#the-deploy-url](../guide/deployment#the-deploy-url). URL where files will be deployed. | `string` | | | `--extract-licenses` | Extract all licenses in a separate file. | `boolean` | `true` | | `--help` | Shows a help message for this command in the console. | `boolean` | | | `--i18n-duplicate-translation` | How to handle duplicate translations for i18n. | `warning | error | ignore` | `warning` | | `--i18n-missing-translation` | How to handle missing translations for i18n. | `warning | error | ignore` | `warning` | | `--index` | Configures the generation of the application's HTML index. | `string` | | | `--inline-style-language` | The stylesheet language to use for the application's inline component styles. | `css | less | sass | scss` | `css` | | `--localize` | Translate the bundles in one or more locales. | `boolean` | | | `--main` | The full path for the main entry point to the app, relative to the current workspace. | `string` | | | `--named-chunks` | Use file name for lazy loaded chunks. | `boolean` | `false` | | `--ngsw-config-path` | Path to ngsw-config.json. | `string` | | | `--optimization` | Enables optimization of the build output, including minification of scripts and styles, tree-shaking, dead-code elimination, critical CSS inlining, and font inlining. For more information, see [https://angular.io/guide/workspace-config#optimization-configuration](../guide/workspace-config#optimization-configuration). | `boolean` | `true` | | `--output-hashing` | Define the output filename cache-busting hashing mode. | `none | all | media | bundles` | `none` | | `--output-path` | The full path for the new output directory, relative to the current workspace. By default, writes output to a folder named dist/ in the current project. | `string` | | | `--poll` | Enable and define the file watching poll time period in milliseconds. | `number` | | | `--polyfills` | Polyfills to be included in the build. | `string` | | | `--preserve-symlinks` | Do not use the real path when resolving modules. If unset then will default to `true` if NodeJS option --preserve-symlinks is set. | `boolean` | | | `--progress` | Log progress to the console while building. | `boolean` | `true` | | `--resources-output-path` | The path where style resources will be placed, relative to outputPath. | `string` | | | `--service-worker` | Generates a service worker config for production builds. | `boolean` | `false` | | `--source-map` | Output source maps for scripts and styles. 
For more information, see [https://angular.io/guide/workspace-config#source-map-configuration](../guide/workspace-config#source-map-configuration). | `boolean` | `false` | | `--stats-json` | Generates a 'stats.json' file which can be analyzed using tools such as 'webpack-bundle-analyzer'. | `boolean` | `false` | | `--subresource-integrity` | Enables the use of subresource integrity validation. | `boolean` | `false` | | `--ts-config` | The full path for the TypeScript configuration file, relative to the current workspace. | `string` | | | `--vendor-chunk` | Generate a separate bundle containing only vendor libraries. This option should only be used for development to reduce the incremental compilation time. | `boolean` | `false` | | `--verbose` | Adds more details to output logging. | `boolean` | `false` | | `--watch` | Run build when files change. | `boolean` | `false` | | `--web-worker-ts-config` | TypeScript configuration for Web Worker modules. | `string` | | angular ng completion Set up Angular CLI autocompletion for your terminal. ### `ng completion` #### Description Setting up autocompletion configures your terminal, so pressing the `<TAB>` key while in the middle of typing will display various commands and options available to you. This makes it very easy to discover and use CLI commands without lots of memorization. ![A demo of Angular CLI autocompletion in a terminal. The user types several partial `ng` commands, using autocompletion to finish several arguments and list contextual options. ](https://angular.io/generated/images/guide/cli/completion.gif) Automated setup --------------- The CLI should prompt you to set up autocompletion the first time you use it (v14+). Simply answer "Yes" and the CLI will take care of the rest. ``` $ ng serve ? Would you like to enable autocompletion? This will set up your terminal so pressing TAB while typing Angular CLI commands will show possible options and autocomplete arguments. (Enabling autocompletion will modify configuration files in your home directory.) Yes Appended `source <(ng completion script)` to `/home/my-username/.bashrc`. Restart your terminal or run: source <(ng completion script) to autocomplete `ng` commands. # Serve output... ``` If you already refused the prompt, it won't ask again. But you can run `ng completion` to do the same thing automatically. This modifies your terminal environment to load Angular CLI autocompletion, but can't update your current terminal session. Either restart it or run `source <(ng completion script)` directly to enable autocompletion in your current session. Test it out by typing `ng ser<TAB>` and it should autocomplete to `ng serve`. Ambiguous arguments will show all possible options and their documentation, such as `ng generate <TAB>`. Manual setup ------------ Some users may have highly customized terminal setups, possibly with configuration files checked into source control with an opinionated structure. `ng completion` only ever appends Angular's setup to an existing configuration file for your current shell, or creates one if none exists. If you want more control over exactly where this configuration lives, you can manually set it up by having your shell run at startup: ``` source <(ng completion script) ``` This is equivalent to what `ng completion` will automatically set up, and gives power users more flexibility in their environments when desired. 
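For instance, here is a minimal sketch of persisting that manual setup yourself. It assumes a Bash shell whose startup file is `~/.bashrc`; if you use Zsh or a custom dotfile layout, substitute the startup file your shell actually reads.

```
# Append the completion loader to the shell startup file
# (~/.bashrc is an assumed path; adjust it for your own setup).
echo 'source <(ng completion script)' >> ~/.bashrc

# Load completion into the current session as well, so it takes
# effect without restarting the terminal.
source <(ng completion script)
```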
Platform support ---------------- Angular CLI supports autocompletion for the Bash and Zsh shells on MacOS and Linux operating systems. On Windows, Git Bash and [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/) using Bash or Zsh are supported. Global install -------------- Autocompletion works by configuring your terminal to invoke the Angular CLI on startup to load the setup script. This means the terminal must be able to find and execute the Angular CLI, typically through a global install that places the binary on the user's `$PATH`. If you get `command not found: ng`, make sure the CLI is installed globally which you can do with the `-g` flag: ``` npm install -g @angular/cli ``` This command has the following [sub-commands](https://angular.io/cli/cli/completion#completion-commands): * [script](https://angular.io/cli/cli/completion#script-command) Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--help` | Shows a help message for this command in the console. | `boolean` | | Completion commands -------------------- #### script ### `ng completion script` Generate a bash and zsh real-time type-ahead autocompletion script.
angular ng run Runs an Architect target with an optional custom builder configuration defined in your project. ### `ng run <target>` #### Description Architect is the tool that the CLI uses to perform complex tasks such as compilation, according to provided configurations. The CLI commands run Architect targets such as `build`, `serve`, `test`, and `lint`. Each named target has a default configuration, specified by an `options` object, and an optional set of named alternate configurations in the `configurations` object. For example, the `serve` target for a newly generated app has a predefined alternate configuration named `production`. You can define new targets and their configuration options in the `architect` section of the `angular.json` file, and run them from the command line using the `ng run` command. Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `target` | The Architect target to run, provided in the following format: `project:target[:configuration]`. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--help` | Shows a help message for this command in the console. | `boolean` | | angular ng update Updates your workspace and its dependencies. See <https://update.angular.io/>. ### `ng update [packages..]` #### Description Perform a basic update to the current stable release of the core framework and CLI by running the following command. ``` ng update @angular/cli @angular/core ``` To update to the next beta or pre-release version, use the `--next` option. To update from one major version to another, use the format ``` ng update @angular/cli@^<major_version> @angular/core@^<major_version> ``` We recommend that you always update to the latest patch version, as it contains fixes we released since the initial major release. For example, use the following command to take the latest 10.x.x version and use that to update. ``` ng update @angular/cli@^10 @angular/core@^10 ``` For detailed information and guidance on updating your application, see the interactive [Angular Update Guide](https://update.angular.io/). Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `packages` | The names of package(s) to update. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--allow-dirty` | Whether to allow updating when the repository contains modified or untracked files. | `boolean` | `false` | | `--create-commits` | Create source control commits for updates and migrations. Aliases: -C | `boolean` | `false` | | `--force` | Ignore peer dependency version mismatches. | `boolean` | `false` | | `--from` | Version from which to migrate. Only available with a single package being updated, and only with 'migrate-only'. | `string` | | | `--help` | Shows a help message for this command in the console. | `boolean` | | | `--migrate-only` | Only perform a migration, do not update the installed version. | `boolean` | | | `--name` | The name of the migration to run. Only available with a single package being updated, and only with the 'migrate-only' option. | `string` | | | `--next` | Use the prerelease version, including beta and RCs. | `boolean` | `false` | | `--to` | Version up to which to apply migrations. Only available with a single package being updated, and only with the 'migrate-only' option. Requires 'from' to be specified. Defaults to the installed version detected. 
| `string` | | | `--verbose` | Display additional details about internal operations during execution. | `boolean` | `false` | angular ng extract-i18n Extracts i18n messages from source code. ### `ng extract-i18n [project]` #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `project` | The name of the project to build. Can be an application or a library. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--browser-target` | A browser builder target to extract i18n messages in the format of `project:target[:configuration]`. You can also pass in more than one configuration name as a comma-separated list. Example: `project:target:production,staging`. | `string` | | | `--configuration` | One or more named builder configurations as a comma-separated list as specified in the "configurations" section in angular.json. The builder uses the named configurations to run the given target. For more information, see [https://angular.io/guide/workspace-config#alternate-build-configurations](../guide/workspace-config#alternate-build-configurations). Aliases: -c | `string` | | | `--format` | Output format for the generated file. | `xmb | xlf | xlif | xliff | xlf2 | xliff2 | json | arb | legacy-migrate` | `xlf` | | `--help` | Shows a help message for this command in the console. | `boolean` | | | `--out-file` | Name of the file to output. | `string` | | | `--output-path` | Path where output will be placed. | `string` | | | `--progress` | Log progress to the console. | `boolean` | `true` | angular ng add Adds support for an external library to your project. ### `ng add <collection>` #### Description Adds the npm package for a published library to your workspace, and configures the project in the current working directory to use that library, as specified by the library's schematic. For example, adding `@angular/pwa` configures your project for PWA support: ``` ng add @angular/pwa ``` Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `collection` | The package to be added. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--defaults` | Disable interactive input prompts for options with a default. | `boolean` | `false` | | `--dry-run` | Run through and reports activity without writing out results. | `boolean` | `false` | | `--force` | Force overwriting of existing files. | `boolean` | `false` | | `--help` | Shows a help message for this command in the console. | `boolean` | | | `--interactive` | Enable interactive input prompts. | `boolean` | `true` | | `--registry` | The NPM registry to use. | `string` | | | `--skip-confirmation` | Skip asking a confirmation prompt before installing and executing the package. Ensure package name is correct prior to using this option. | `boolean` | `false` | | `--verbose` | Display additional details about internal operations during execution. | `boolean` | `false` | angular ng doc Opens the official Angular documentation (angular.io) in a browser, and searches for a given keyword. ### `ng doc <keyword>` ### `ng d <keyword>` #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `keyword` | The keyword to search for, as provided in the search bar in angular.io. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--help` | Shows a help message for this command in the console. 
| `boolean` | | | `--search` | Search all of angular.io. Otherwise, searches only API reference documentation. Aliases: -s | `boolean` | `false` | | `--version` | Contains the version of Angular to use for the documentation. If not provided, the command uses your current Angular core version. | `string` | | angular ng cache Configure persistent disk cache and retrieve cache statistics. ### `ng cache` #### Description Angular CLI saves a number of cachable operations on disk by default. When you re-run the same build, the build system restores the state of the previous build and re-uses previously performed operations, which decreases the time taken to build and test your applications and libraries. To amend the default cache settings, add the `cli.cache` object to your [Workspace Configuration](../guide/workspace-config). The object goes under `cli.cache` at the top level of the file, outside the `projects` sections. ``` { "$schema": "./node_modules/@angular/cli/lib/config/schema.json", "version": 1, "cli": { "cache": { // ... } }, "projects": {} } ``` For more information, see [cache options](../guide/workspace-config#cache-options). #### Cache environments By default, disk cache is only enabled for local environments. The value of environment can be one of the following: * `all` - allows disk cache on all machines. * `local` - allows disk cache only on development machines. * `ci` - allows disk cache only on continuous integration (CI) systems. To change the environment setting to `all`, run the following command: ``` ng config cli.cache.environment all ``` For more information, see `environment` in [cache options](../guide/workspace-config#cache-options). > The Angular CLI checks for the presence and value of the `CI` environment variable to determine in which environment it is running. > > #### Cache path By default, `.angular/cache` is used as a base directory to store cache results. To change this path to `.cache/ng`, run the following command: ``` ng config cli.cache.path ".cache/ng" ``` This command has the following [sub-commands](https://angular.io/cli/cli/cache#cache-commands): * [clean](https://angular.io/cli/cli/cache#clean-command) * [disable](https://angular.io/cli/cli/cache#disable-command) * [enable](https://angular.io/cli/cli/cache#enable-command) * [info](https://angular.io/cli/cli/cache#info-command) Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--help` | Shows a help message for this command in the console. | `boolean` | | Cache commands --------------- #### clean ### `ng cache clean` Deletes persistent disk cache from disk. #### disable ### `ng cache disable` ### `ng cache off` Disables persistent disk cache for all projects in the workspace. #### enable ### `ng cache enable` ### `ng cache on` Enables disk cache for all projects in the workspace. #### info ### `ng cache info` Prints persistent disk cache configuration and statistics in the console. angular ng lint Runs linting tools on Angular application code in a given project folder. ### `ng lint [project]` #### Description The command takes an optional project name, as specified in the `projects` section of the `angular.json` workspace configuration file. When a project name is not supplied, executes the `lint` builder for all projects. To use the `ng lint` command, use `ng add` to add a package that implements linting capabilities. Adding the package automatically updates your workspace configuration, adding a lint [CLI builder](../guide/cli-builder). 
For example: ``` "projects": { "my-project": { ... "architect": { ... "lint": { "builder": "@angular-eslint/builder:lint", "options": {} } } } } ``` Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `project` | The name of the project to build. Can be an application or a library. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--configuration` | One or more named builder configurations as a comma-separated list as specified in the "configurations" section in angular.json. The builder uses the named configurations to run the given target. For more information, see [https://angular.io/guide/workspace-config#alternate-build-configurations](../guide/workspace-config#alternate-build-configurations). Aliases: -c | `string` | | | `--help` | Shows a help message for this command in the console. | `boolean` | | angular ng generate Generates and/or modifies files based on a schematic. ### `ng generate <schematic>` ### `ng g <schematic>` This command has the following [sub-commands](https://angular.io/cli/cli/generate#generate-commands): * [app-shell](https://angular.io/cli/cli/generate#app-shell-command) * [application](https://angular.io/cli/cli/generate#application-command) * [class](https://angular.io/cli/cli/generate#class-command) * [component](https://angular.io/cli/cli/generate#component-command) * [config](https://angular.io/cli/cli/generate#config-command) * [directive](https://angular.io/cli/cli/generate#directive-command) * [enum](https://angular.io/cli/cli/generate#enum-command) * [environments](https://angular.io/cli/cli/generate#environments-command) * [guard](https://angular.io/cli/cli/generate#guard-command) * [interceptor](https://angular.io/cli/cli/generate#interceptor-command) * [interface](https://angular.io/cli/cli/generate#interface-command) * [library](https://angular.io/cli/cli/generate#library-command) * [module](https://angular.io/cli/cli/generate#module-command) * [pipe](https://angular.io/cli/cli/generate#pipe-command) * [resolver](https://angular.io/cli/cli/generate#resolver-command) * [service](https://angular.io/cli/cli/generate#service-command) * [service-worker](https://angular.io/cli/cli/generate#service-worker-command) * [web-worker](https://angular.io/cli/cli/generate#web-worker-command) Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `schematic` | The [collection:schematic] to run. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--defaults` | Disable interactive input prompts for options with a default. | `boolean` | `false` | | `--dry-run` | Run through and reports activity without writing out results. | `boolean` | `false` | | `--force` | Force overwriting of existing files. | `boolean` | `false` | | `--help` | Shows a help message for this command in the console. | `boolean` | | | `--interactive` | Enable interactive input prompts. | `boolean` | `true` | Generate commands ------------------ #### app-shell ### `ng generate app-shell` Generates an application shell for running a server-side version of an app. #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--app-id` | The application ID to use in withServerTransition(). | `string` | `serverApp` | | `--main` | The name of the main entry-point file. | `string` | `main.server.ts` | | `--project` | The name of the related client app. 
| `string` | | | `--root-module-class-name` | The name of the root module class. | `string` | `AppServerModule` | | `--root-module-file-name` | The name of the root module file | `string` | `app.server.module.ts` | | `--route` | Route path used to produce the application shell. | `string` | `shell` | #### application ### `ng generate application [name]` ### `ng generate app [name]` Generates a new basic application definition in the "projects" subfolder of the workspace. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the new application. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--inline-style` | Include styles inline in the root component.ts file. Only CSS styles can be included inline. Default is false, meaning that an external styles file is created and referenced in the root component.ts file. Aliases: -s | `boolean` | | | `--inline-template` | Include template inline in the root component.ts file. Default is false, meaning that an external template file is created and referenced in the root component.ts file. Aliases: -t | `boolean` | | | `--minimal` | Create a bare-bones project without any testing frameworks. (Use for learning purposes only.) | `boolean` | `false` | | `--prefix` | A prefix to apply to generated selectors. Aliases: -p | `string` | `app` | | `--project-root` | The root directory of the new application. | `string` | | | `--routing` | Create a routing NgModule. | `boolean` | `false` | | `--skip-install` | Skip installing dependency packages. | `boolean` | `false` | | `--skip-package-json` | Do not add dependencies to the "package.json" file. | `boolean` | `false` | | `--skip-tests` | Do not create "spec.ts" test files for the application. Aliases: -S | `boolean` | `false` | | `--strict` | Creates an application with stricter bundle budgets settings. | `boolean` | `true` | | `--style` | The file extension or preprocessor to use for style files. | `css | scss | sass | less` | `css` | | `--view-encapsulation` | The view encapsulation strategy to use in the new application. | `Emulated | None | ShadowDom` | | #### class ### `ng generate class [name]` ### `ng generate cl [name]` Creates a new, generic class definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the new class. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--project` | The name of the project. | `string` | | | `--skip-tests` | Do not create "spec.ts" test files for the new class. | `boolean` | `false` | | `--type` | Adds a developer-defined type to the filename, in the format "name.type.ts". | `string` | | #### component ### `ng generate component [name]` ### `ng generate c [name]` Creates a new, generic component definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the component. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--change-detection` | The change detection strategy to use in the new component. Aliases: -c | `Default | OnPush` | `Default` | | `--display-block` | Specifies if the style will contain `:host { display: block; }`. Aliases: -b | `boolean` | `false` | | `--export` | The declaring NgModule exports this component. 
| `boolean` | `false` | | `--flat` | Create the new files at the top level of the current project. | `boolean` | `false` | | `--inline-style` | Include styles inline in the component.ts file. Only CSS styles can be included inline. By default, an external styles file is created and referenced in the component.ts file. Aliases: -s | `boolean` | `false` | | `--inline-template` | Include template inline in the component.ts file. By default, an external template file is created and referenced in the component.ts file. Aliases: -t | `boolean` | `false` | | `--module` | The declaring NgModule. Aliases: -m | `string` | | | `--prefix` | The prefix to apply to the generated component selector. Aliases: -p | `string` | | | `--project` | The name of the project. | `string` | | | `--selector` | The HTML selector to use for this component. | `string` | | | `--skip-import` | Do not import this component into the owning NgModule. | `boolean` | `false` | | `--skip-selector` | Specifies if the component should have a selector or not. | `boolean` | `false` | | `--skip-tests` | Do not create "spec.ts" test files for the new component. | `boolean` | `false` | | `--standalone` | Whether the generated component is standalone. | `boolean` | `false` | | `--style` | The file extension or preprocessor to use for style files, or 'none' to skip generating the style file. | `css | scss | sass | less | none` | `css` | | `--type` | Adds a developer-defined type to the filename, in the format "name.type.ts". | `string` | `Component` | | `--view-encapsulation` | The view encapsulation strategy to use in the new component. Aliases: -v | `Emulated | None | ShadowDom` | | #### config ### `ng generate config [type]` Generates a configuration file in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `type` | Specifies which type of configuration file to create. | `karma | browserslist` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--project` | The name of the project. | `string` | | #### directive ### `ng generate directive [name]` ### `ng generate d [name]` Creates a new, generic directive definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the new directive. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--export` | The declaring NgModule exports this directive. | `boolean` | `false` | | `--flat` | When true (the default), creates the new files at the top level of the current project. | `boolean` | `true` | | `--module` | The declaring NgModule. Aliases: -m | `string` | | | `--prefix` | A prefix to apply to generated selectors. Aliases: -p | `string` | | | `--project` | The name of the project. | `string` | | | `--selector` | The HTML selector to use for this directive. | `string` | | | `--skip-import` | Do not import this directive into the owning NgModule. | `boolean` | `false` | | `--skip-tests` | Do not create "spec.ts" test files for the new class. | `boolean` | `false` | | `--standalone` | Whether the generated directive is standalone. | `boolean` | `false` | #### enum ### `ng generate enum [name]` ### `ng generate e [name]` Generates a new, generic enum definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the enum. 
| `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--project` | The name of the project in which to create the enum. Default is the configured default project for the workspace. | `string` | | | `--type` | Adds a developer-defined type to the filename, in the format "name.type.ts". | `string` | | #### environments ### `ng generate environments` Generates and configures environment files for a project. #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--project` | The name of the project. | `string` | | #### guard ### `ng generate guard [name]` ### `ng generate g [name]` Generates a new, generic route guard definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the new route guard. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--flat` | When true (the default), creates the new files at the top level of the current project. | `boolean` | `true` | | `--functional` | Specifies whether to generate a guard as a function. | `boolean` | `false` | | `--implements` | Specifies which type of guard to create. Aliases: --guardType | `array` | | | `--project` | The name of the project. | `string` | | | `--skip-tests` | Do not create "spec.ts" test files for the new guard. | `boolean` | `false` | #### interceptor ### `ng generate interceptor [name]` Creates a new, generic interceptor definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the interceptor. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--flat` | When true (the default), creates files at the top level of the project. | `boolean` | `true` | | `--functional` | Creates the interceptor as a `[HttpInterceptorFn](../api/common/http/httpinterceptorfn)`. | `boolean` | `false` | | `--project` | The name of the project. | `string` | | | `--skip-tests` | Do not create "spec.ts" test files for the new interceptor. | `boolean` | `false` | #### interface ### `ng generate interface [name] [type]` ### `ng generate i [name] [type]` Creates a new, generic interface definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the interface. | `string` | | `type` | Adds a developer-defined type to the filename, in the format "name.type.ts". | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--prefix` | A prefix to apply to generated selectors. | `string` | | | `--project` | The name of the project. | `string` | | #### library ### `ng generate library [name]` ### `ng generate lib [name]` Creates a new, generic library project in the current workspace. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the library. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--entry-file` | The path at which to create the library's public API file, relative to the workspace root. | `string` | `public-api` | | `--prefix` | A prefix to apply to generated selectors. Aliases: -p | `string` | `lib` | | `--project-root` | The root directory of the new library. | `string` | | | `--skip-install` | Do not install dependency packages. 
| `boolean` | `false` | | `--skip-package-json` | Do not add dependencies to the "package.json" file. | `boolean` | `false` | | `--skip-ts-config` | Do not update "tsconfig.json" to add a path mapping for the new library. The path mapping is needed to use the library in an app, but can be disabled here to simplify development. | `boolean` | `false` | #### module ### `ng generate module [name]` ### `ng generate m [name]` Creates a new, generic NgModule definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the NgModule. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--flat` | Create the new files at the top level of the current project root. | `boolean` | `false` | | `--module` | The declaring NgModule. Aliases: -m | `string` | | | `--project` | The name of the project. | `string` | | | `--route` | The route path for a lazy-loaded module. When supplied, creates a component in the new module, and adds the route to that component in the `[Routes](../api/router/routes)` array declared in the module provided in the `--module` option. | `string` | | | `--routing` | Create a routing module. | `boolean` | `false` | | `--routing-scope` | The scope for the new routing module. | `Child | Root` | `Child` | #### pipe ### `ng generate pipe [name]` ### `ng generate p [name]` Creates a new, generic pipe definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the pipe. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--export` | The declaring NgModule exports this pipe. | `boolean` | `false` | | `--flat` | When true (the default) creates files at the top level of the project. | `boolean` | `true` | | `--module` | The declaring NgModule. Aliases: -m | `string` | | | `--project` | The name of the project. | `string` | | | `--skip-import` | Do not import this pipe into the owning NgModule. | `boolean` | `false` | | `--skip-tests` | Do not create "spec.ts" test files for the new pipe. | `boolean` | `false` | | `--standalone` | Whether the generated pipe is standalone. | `boolean` | `false` | #### resolver ### `ng generate resolver [name]` ### `ng generate r [name]` Generates a new, generic resolver definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the new resolver. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--flat` | When true (the default), creates the new files at the top level of the current project. | `boolean` | `true` | | `--functional` | Creates the resolver as a `[ResolveFn](../api/router/resolvefn)`. | `boolean` | `false` | | `--project` | The name of the project. | `string` | | | `--skip-tests` | Do not create "spec.ts" test files for the new resolver. | `boolean` | `false` | #### service ### `ng generate service [name]` ### `ng generate s [name]` Creates a new, generic service definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the service. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--flat` | When true (the default), creates files at the top level of the project. | `boolean` | `true` | | `--project` | The name of the project. 
| `string` | | | `--skip-tests` | Do not create "spec.ts" test files for the new service. | `boolean` | `false` | #### service-worker ### `ng generate service-worker` Pass this schematic to the "run" command to create a service worker #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--project` | The name of the project. | `string` | | | `--target` | The target to apply service worker to. | `string` | `build` | #### web-worker ### `ng generate web-worker [name]` Creates a new, generic web worker definition in the given project. #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the worker. | `string` | #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--project` | The name of the project. | `string` | | | `--snippet` | Add a worker creation snippet in a sibling file of the same name. | `boolean` | `true` |
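As a quick illustration of how these schematics and their options combine on the command line, here is a hedged sketch; the project name `my-app` and the artifact names `checkout` and `cart` are placeholders rather than part of the reference above.

```
# Preview what the component schematic would create, without writing files.
ng generate component checkout --project my-app --dry-run

# Generate the component as standalone and skip the spec file.
ng generate component checkout --project my-app --standalone --skip-tests

# The "g" alias works for any schematic, for example a service.
ng g service cart --project my-app
```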
angular ng e2e Builds and serves an Angular application, then runs end-to-end tests. ### `ng e2e [project]` ### `ng e [project]` #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `project` | The name of the project to build. Can be an application or a library. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--configuration` | One or more named builder configurations as a comma-separated list as specified in the "configurations" section in angular.json. The builder uses the named configurations to run the given target. For more information, see [https://angular.io/guide/workspace-config#alternate-build-configurations](../guide/workspace-config#alternate-build-configurations). Aliases: -c | `string` | | | `--help` | Shows a help message for this command in the console. | `boolean` | | angular ng test Runs unit tests in a project. ### `ng test [project]` ### `ng t [project]` #### Description Takes the name of the project, as specified in the `projects` section of the `angular.json` workspace configuration file. When a project name is not supplied, it will execute for all projects. Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `project` | The name of the project to build. Can be an application or a library. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--browsers` | Override which browsers tests are run against. | `string` | | | `--code-coverage` | Output a code coverage report. | `boolean` | `false` | | `--code-coverage-exclude` | Globs to exclude from code coverage. | `array` | | | `--configuration` | One or more named builder configurations as a comma-separated list as specified in the "configurations" section in angular.json. The builder uses the named configurations to run the given target. For more information, see [https://angular.io/guide/workspace-config#alternate-build-configurations](../guide/workspace-config#alternate-build-configurations). Aliases: -c | `string` | | | `--exclude` | Globs of files to exclude, relative to the project root. | `array` | | | `--help` | Shows a help message for this command in the console. | `boolean` | | | `--include` | Globs of files to include, relative to project root. There are 2 special cases:* when a path to directory is provided, all spec files ending ".spec.@(ts|tsx)" will be included * when a path to a file is provided, and a matching spec file exists it will be included instead. | `array` | | | `--inline-style-language` | The stylesheet language to use for the application's inline component styles. | `css | less | sass | scss` | `css` | | `--karma-config` | The name of the Karma configuration file. | `string` | | | `--main` | The name of the main entry-point file. | `string` | | | `--poll` | Enable and define the file watching poll time period in milliseconds. | `number` | | | `--polyfills` | Polyfills to be included in the build. | `string` | | | `--preserve-symlinks` | Do not use the real path when resolving modules. If unset then will default to `true` if NodeJS option --preserve-symlinks is set. | `boolean` | | | `--progress` | Log progress to the console while building. | `boolean` | `true` | | `--reporters` | Karma reporters to use. Directly passed to the karma runner. | `array` | | | `--source-map` | Output source maps for scripts and styles. 
For more information, see [https://angular.io/guide/workspace-config#source-map-configuration](../guide/workspace-config#source-map-configuration). | `boolean` | `true` | | `--ts-config` | The name of the TypeScript configuration file. | `string` | | | `--watch` | Run build when files change. | `boolean` | | | `--web-worker-ts-config` | TypeScript configuration for Web Worker modules. | `string` | | angular ng config Retrieves or sets Angular configuration values in the angular.json file for the workspace. ### `ng config [json-path] [value]` #### Description A workspace has a single CLI configuration file, `angular.json`, at the top level. The `projects` object contains a configuration object for each project in the workspace. You can edit the configuration directly in a code editor, or indirectly on the command line using this command. The configurable property names match command option names, except that in the configuration file, all names must use camelCase, while on the command line options can be given dash-case. For further details, see [Workspace Configuration](../guide/workspace-config). For configuration of CLI usage analytics, see [ng analytics](https://angular.io/cli/cli/analytics). Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `json-path` | The configuration key to set or query, in JSON path format. For example: "a[3].foo.bar[2]". If no new value is provided, returns the current value of this key. | `string` | | `value` | If provided, a new value for the given configuration key. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--global` | Access the global configuration in the caller's home directory. Aliases: -g | `boolean` | `false` | | `--help` | Shows a help message for this command in the console. | `boolean` | | angular ng deploy Invokes the deploy builder for a specified project or for the default project in the workspace. ### `ng deploy [project]` #### Description The command takes an optional project name, as specified in the `projects` section of the `angular.json` workspace configuration file. When a project name is not supplied, executes the `deploy` builder for the default project. To use the `ng deploy` command, use `ng add` to add a package that implements deployment capabilities to your favorite platform. Adding the package automatically updates your workspace configuration, adding a deployment [CLI builder](../guide/cli-builder). For example: ``` "projects": { "my-project": { ... "architect": { ... "deploy": { "builder": "@angular/fire:deploy", "options": {} } } } } ``` Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `project` | The name of the project to build. Can be an application or a library. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--configuration` | One or more named builder configurations as a comma-separated list as specified in the "configurations" section in angular.json. The builder uses the named configurations to run the given target. For more information, see [https://angular.io/guide/workspace-config#alternate-build-configurations](../guide/workspace-config#alternate-build-configurations). Aliases: -c | `string` | | | `--help` | Shows a help message for this command in the console. | `boolean` | | angular ng analytics Configures the gathering of Angular CLI usage metrics. 
### `ng analytics` #### Description You can help the Angular Team to prioritize features and improvements by permitting the Angular team to send command-line command usage statistics to Google. The Angular Team does not collect usage statistics unless you explicitly opt in. When installing the Angular CLI you are prompted to allow global collection of usage statistics. If you say no or skip the prompt, no data is collected. #### What is collected? Usage analytics include the commands and selected flags for each execution. Usage analytics may include the following information: * Your operating system (macOS, Linux distribution, Windows) and its version. * Package manager name and version (local version only). * Node.js version (local version only). * Angular CLI version (local version only). * Command name that was run. * Workspace information, the number of application and library projects. * For schematics commands (add, generate and new), the schematic collection and name and a list of selected flags. * For build commands (build, serve), the builder name, the number and size of bundles (initial and lazy), compilation units, the time it took to build and rebuild, and basic Angular-specific API usage. Only Angular owned and developed schematics and builders are reported. Third-party schematics and builders do not send data to the Angular Team. This command has the following [sub-commands](https://angular.io/cli/cli/analytics#analytics-commands): * [disable](https://angular.io/cli/cli/analytics#disable-command) * [enable](https://angular.io/cli/cli/analytics#enable-command) * [info](https://angular.io/cli/cli/analytics#info-command) * [prompt](https://angular.io/cli/cli/analytics#prompt-command) Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--help` | Shows a help message for this command in the console. | `boolean` | | Analytics commands ------------------- #### disable ### `ng analytics disable` ### `ng analytics off` Disables analytics gathering and reporting for the user. #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--global` | Configure analytics gathering and reporting globally in the caller's home directory. Aliases: -g | `boolean` | `false` | #### enable ### `ng analytics enable` ### `ng analytics on` Enables analytics gathering and reporting for the user. #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--global` | Configure analytics gathering and reporting globally in the caller's home directory. Aliases: -g | `boolean` | `false` | #### info ### `ng analytics info` Prints analytics gathering and reporting configuration in the console. #### prompt ### `ng analytics prompt` Prompts the user to set the analytics gathering status interactively. #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--global` | Configure analytics gathering and reporting globally in the caller's home directory. Aliases: -g | `boolean` | `false` | angular ng serve Builds and serves your application, rebuilding on file changes. ### `ng serve [project]` ### `ng s [project]` #### Arguments | Argument | Description | Value Type | | --- | --- | --- | | `project` | The name of the project to build. Can be an application or a library. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--allowed-hosts` | List of hosts that are allowed to access the dev server. 
| `array` | | | `--browser-target` | A browser builder target to serve in the format of `project:target[:configuration]`. You can also pass in more than one configuration name as a comma-separated list. Example: `project:target:production,staging`. | `string` | | | `--configuration` | One or more named builder configurations as a comma-separated list as specified in the "configurations" section in angular.json. The builder uses the named configurations to run the given target. For more information, see [https://angular.io/guide/workspace-config#alternate-build-configurations](../guide/workspace-config#alternate-build-configurations). Aliases: -c | `string` | | | `--disable-host-check` | Don't verify connected clients are part of allowed hosts. | `boolean` | `false` | | `--help` | Shows a help message for this command in the console. | `boolean` | | | `--hmr` | Enable hot module replacement. | `boolean` | `false` | | `--host` | Host to listen on. | `string` | `localhost` | | `--live-reload` | Whether to reload the page on change, using live-reload. | `boolean` | `true` | | `--open` | Opens the url in default browser. Aliases: -o | `boolean` | `false` | | `--poll` | Enable and define the file watching poll time period in milliseconds. | `number` | | | `--port` | Port to listen on. | `number` | `4200` | | `--proxy-config` | Proxy configuration file. For more information, see [https://angular.io/guide/build#proxying-to-a-backend-server](../guide/build#proxying-to-a-backend-server). | `string` | | | `--public-host` | The URL that the browser client (or live-reload client, if enabled) should use to connect to the development server. Use for a complex dev server setup, such as one with reverse proxies. | `string` | | | `--serve-path` | The pathname where the application will be served. | `string` | | | `--ssl` | Serve using HTTPS. | `boolean` | `false` | | `--ssl-cert` | SSL certificate to use for serving HTTPS. | `string` | | | `--ssl-key` | SSL key to use for serving HTTPS. | `string` | | | `--verbose` | Adds more details to output logging. | `boolean` | | | `--watch` | Rebuild on change. | `boolean` | `true` | angular ng version Outputs Angular CLI version. ### `ng version` ### `ng v` #### Options | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--help` | Shows a help message for this command in the console. | `boolean` | | angular ng new Creates a new Angular workspace. ### `ng new [name]` ### `ng n [name]` #### Description Creates and initializes a new Angular application that is the default project for a new workspace. Provides interactive prompts for optional configuration, such as adding routing support. All prompts can safely be allowed to default. * The new workspace folder is given the specified project name, and contains configuration files at the top level. * By default, the files for a new initial application (with the same name as the workspace) are placed in the `src/` subfolder. * The new application's configuration appears in the `projects` section of the `angular.json` workspace configuration file, under its project name. * Subsequent applications that you generate in the workspace reside in the `projects/` subfolder. If you plan to have multiple applications in the workspace, you can create an empty workspace by using the `--no-create-application` option. You can then use `ng generate application` to create an initial application. 
This allows a workspace name different from the initial app name, and ensures that all applications reside in the `/projects` subfolder, matching the structure of the configuration file. Arguments --------- | Argument | Description | Value Type | | --- | --- | --- | | `name` | The name of the new workspace and initial project. | `string` | Options ------- | Option | Description | Value Type | Default Value | | --- | --- | --- | --- | | `--collection` | A collection of schematics to use in generating the initial application. Aliases: -c | `string` | | | `--commit` | Initial git repository commit information. | `boolean` | `true` | | `--create-application` | Create a new initial application project in the 'src' folder of the new workspace. When false, creates an empty workspace with no initial application. You can then use the generate application command so that all applications are created in the projects folder. | `boolean` | `true` | | `--defaults` | Disable interactive input prompts for options with a default. | `boolean` | `false` | | `--directory` | The directory name to create the workspace in. | `string` | | | `--dry-run` | Run through and reports activity without writing out results. | `boolean` | `false` | | `--force` | Force overwriting of existing files. | `boolean` | `false` | | `--help` | Shows a help message for this command in the console. | `boolean` | | | `--inline-style` | Include styles inline in the component TS file. By default, an external styles file is created and referenced in the component TypeScript file. Aliases: -s | `boolean` | | | `--inline-template` | Include template inline in the component TS file. By default, an external template file is created and referenced in the component TypeScript file. Aliases: -t | `boolean` | | | `--interactive` | Enable interactive input prompts. | `boolean` | `true` | | `--minimal` | Create a workspace without any testing frameworks. (Use for learning purposes only.) | `boolean` | `false` | | `--new-project-root` | The path where new projects will be created, relative to the new workspace root. | `string` | `projects` | | `--package-manager` | The package manager used to install dependencies. | `npm | yarn | pnpm | cnpm` | | | `--prefix` | The prefix to apply to generated selectors for the initial project. Aliases: -p | `string` | `app` | | `--routing` | Generate a routing module for the initial project. | `boolean` | | | `--skip-git` | Do not initialize a git repository. Aliases: -g | `boolean` | `false` | | `--skip-install` | Do not install dependency packages. | `boolean` | `false` | | `--skip-tests` | Do not generate "spec.ts" test files for the new project. Aliases: -S | `boolean` | `false` | | `--strict` | Creates a workspace with stricter type checking and stricter bundle budgets settings. This setting helps improve maintainability and catch bugs ahead of time. For more information, see [https://angular.io/guide/strict-mode](../guide/strict-mode) | `boolean` | `true` | | `--style` | The file extension or preprocessor to use for style files. | `css | scss | sass | less` | | | `--view-encapsulation` | The view encapsulation strategy to use in the initial project. | `Emulated | None | ShadowDom` | | angular @angular/router @angular/router =============== `package` Implements the Angular Router service , which enables navigation from one view to the next as users perform application tasks. 
Defines the `[Route](router/route)` object that maps a URL path to a component, and the `[RouterOutlet](router/routeroutlet)` directive that you use to place a routed view in a template, as well as a complete API for configuring, querying, and controlling the router state. Import `[RouterModule](router/routermodule)` to use the Router service in your app. For more usage information, see the [Routing and Navigation](../guide/router) guide. Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/router](router#primary-entry-point-exports)` | Implements the Angular Router service , which enables navigation from one view to the next as users perform application tasks. | ### Secondary | | | | --- | --- | | `[@angular/router/testing](router/testing)` | Supplies a testing module for the Angular `[Router](router/router)` subsystem. | | `[@angular/router/upgrade](router/upgrade)` | Provides support for upgrading routing applications from Angular JS to Angular. | Primary entry point exports --------------------------- ### NgModules | | | | --- | --- | | `[RouterModule](router/routermodule)` | Adds directives and providers for in-app navigation among views defined in an application. Use the Angular `[Router](router/router)` service to declaratively specify application states and manage state transitions. | ### Classes | | | | --- | --- | | `[ActivatedRoute](router/activatedroute)` | Provides access to information about a route associated with a component that is loaded in an outlet. Use to traverse the `[RouterState](router/routerstate)` tree and extract information from nodes. | | `[ActivatedRouteSnapshot](router/activatedroutesnapshot)` | Contains the information about a route associated with a component loaded in an outlet at a particular moment in time. ActivatedRouteSnapshot can also be used to traverse the router state tree. | | `[ActivationEnd](router/activationend)` | An event triggered at the end of the activation part of the Resolve phase of routing. | | `[ActivationStart](router/activationstart)` | An event triggered at the start of the activation part of the Resolve phase of routing. | | `[BaseRouteReuseStrategy](router/baseroutereusestrategy)` | This base route reuse strategy only reuses routes when the matched router configs are identical. This prevents components from being destroyed and recreated when just the route parameters, query parameters or fragment change (that is, the existing component is *reused*). | | `[ChildActivationEnd](router/childactivationend)` | An event triggered at the end of the child-activation part of the Resolve phase of routing. | | `[ChildActivationStart](router/childactivationstart)` | An event triggered at the start of the child-activation part of the Resolve phase of routing. | | `[ChildrenOutletContexts](router/childrenoutletcontexts)` | Store contextual information about the children (= nested) `[RouterOutlet](router/routeroutlet)` | | `[DefaultTitleStrategy](router/defaulttitlestrategy)` | The default `[TitleStrategy](router/titlestrategy)` used by the router that updates the title using the `[Title](platform-browser/title)` service. | | `[DefaultUrlSerializer](router/defaulturlserializer)` | A default implementation of the `[UrlSerializer](router/urlserializer)`. | | `[GuardsCheckEnd](router/guardscheckend)` | An event triggered at the end of the Guard phase of routing. | | `[GuardsCheckStart](router/guardscheckstart)` | An event triggered at the start of the Guard phase of routing. 
| | `[NavigationCancel](router/navigationcancel)` | An event triggered when a navigation is canceled, directly or indirectly. This can happen for several reasons including when a route guard returns `false` or initiates a redirect by returning a `[UrlTree](router/urltree)`. | | `[NavigationEnd](router/navigationend)` | An event triggered when a navigation ends successfully. | | `[NavigationError](router/navigationerror)` | An event triggered when a navigation fails due to an unexpected error. | | `[NavigationSkipped](router/navigationskipped)` | An event triggered when a navigation is skipped. This can happen for a couple reasons including onSameUrlHandling is set to `ignore` and the navigation URL is not different than the current state. | | `[NavigationStart](router/navigationstart)` | An event triggered when a navigation starts. | | `[NoPreloading](router/nopreloading)` | Provides a preloading strategy that does not preload any modules. | | `[OutletContext](router/outletcontext)` | Store contextual information about a `[RouterOutlet](router/routeroutlet)` | | `[PreloadAllModules](router/preloadallmodules)` | Provides a preloading strategy that preloads all modules as quickly as possible. | | `[PreloadingStrategy](router/preloadingstrategy)` | Provides a preloading strategy. | | `[ResolveEnd](router/resolveend)` | An event triggered at the end of the Resolve phase of routing. | | `[ResolveStart](router/resolvestart)` | An event triggered at the start of the Resolve phase of routing. | | `[RouteConfigLoadEnd](router/routeconfigloadend)` | An event triggered when a route has been lazy loaded. | | `[RouteConfigLoadStart](router/routeconfigloadstart)` | An event triggered before lazy loading a route configuration. | | `[RouteReuseStrategy](router/routereusestrategy)` | Provides a way to customize when activated routes get reused. | | `[Router](router/router)` | A service that provides navigation among views and URL manipulation capabilities. | | `[RouterEvent](router/routerevent)` | Base for events the router goes through, as opposed to events tied to a specific route. Fired one time for any given navigation. | | `[RouterPreloader](router/routerpreloader)` | The preloader optimistically loads all router configurations to make navigations into lazily-loaded sections of the application faster. | | `[RouterState](router/routerstate)` | Represents the state of the router as a tree of activated routes. | | `[RouterStateSnapshot](router/routerstatesnapshot)` | Represents the state of the router at a moment in time. | | `[RoutesRecognized](router/routesrecognized)` | An event triggered when routes are recognized. | | `[Scroll](router/scroll)` | An event triggered by scrolling. | | `[TitleStrategy](router/titlestrategy)` | Provides a strategy for setting the page title after a router navigation. | | `[UrlHandlingStrategy](router/urlhandlingstrategy)` | Provides a way to migrate AngularJS applications to Angular. | | `[UrlSegment](router/urlsegment)` | Represents a single URL segment. | | `[UrlSegmentGroup](router/urlsegmentgroup)` | Represents the parsed URL segment group. | | `[UrlSerializer](router/urlserializer)` | Serializes and deserializes a URL string into a URL tree. | | `[UrlTree](router/urltree)` | Represents the parsed URL. | ### Functions | | | | --- | --- | | `[convertToParamMap](router/converttoparammap)` | Converts a `[Params](router/params)` instance to a `[ParamMap](router/parammap)`. 
| | `[createUrlTreeFromSnapshot](router/createurltreefromsnapshot)` | Creates a `[UrlTree](router/urltree)` relative to an `[ActivatedRouteSnapshot](router/activatedroutesnapshot)`. | | `[defaultUrlMatcher](router/defaulturlmatcher)` | Matches the route configuration (`route`) against the actual URL (`segments`). | | `[provideRouter](router/providerouter)` | Sets up providers necessary to enable `[Router](router/router)` functionality for the application. Allows to configure a set of routes as well as extra features that should be enabled. | | `[provideRoutes](router/provideroutes)` | **Deprecated:** If necessary, provide routes using the `[ROUTES](router/routes)` `[InjectionToken](core/injectiontoken)`. Registers a [DI provider](../guide/glossary#provider) for a set of routes. | | `[withDebugTracing](router/withdebugtracing)` | Enables logging of all internal navigation events to the console. Extra logging might be useful for debugging purposes to inspect Router event sequence. | | `[withDisabledInitialNavigation](router/withdisabledinitialnavigation)` | Disables initial navigation. | | `[withEnabledBlockingInitialNavigation](router/withenabledblockinginitialnavigation)` | Configures initial navigation to start before the root component is created. | | `[withHashLocation](router/withhashlocation)` | Provides the location strategy that uses the URL fragment instead of the history API. | | `[withInMemoryScrolling](router/withinmemoryscrolling)` | Enables customizable scrolling behavior for router navigations. | | `[withPreloading](router/withpreloading)` | Allows to configure a preloading strategy to use. The strategy is configured by providing a reference to a class that implements a `[PreloadingStrategy](router/preloadingstrategy)`. | | `[withRouterConfig](router/withrouterconfig)` | Allows to provide extra parameters to configure Router. | ### Structures | | | | --- | --- | | `[CanActivate](router/canactivate)` | Interface that a class can implement to be a guard deciding if a route can be activated. If all guards return `true`, navigation continues. If any guard returns `false`, navigation is cancelled. If any guard returns a `[UrlTree](router/urltree)`, the current navigation is cancelled and a new navigation begins to the `[UrlTree](router/urltree)` returned from the guard. | | `[CanActivateChild](router/canactivatechild)` | Interface that a class can implement to be a guard deciding if a child route can be activated. If all guards return `true`, navigation continues. If any guard returns `false`, navigation is cancelled. If any guard returns a `[UrlTree](router/urltree)`, current navigation is cancelled and a new navigation begins to the `[UrlTree](router/urltree)` returned from the guard. | | `[CanDeactivate](router/candeactivate)` | Interface that a class can implement to be a guard deciding if a route can be deactivated. If all guards return `true`, navigation continues. If any guard returns `false`, navigation is cancelled. If any guard returns a `[UrlTree](router/urltree)`, current navigation is cancelled and a new navigation begins to the `[UrlTree](router/urltree)` returned from the guard. | | `[CanLoad](router/canload)` | **Deprecated:** Use `[CanMatch](router/canmatch)` instead Interface that a class can implement to be a guard deciding if children can be loaded. If all guards return `true`, navigation continues. If any guard returns `false`, navigation is cancelled. 
If any guard returns a `[UrlTree](router/urltree)`, current navigation is cancelled and a new navigation starts to the `[UrlTree](router/urltree)` returned from the guard. | | `[CanMatch](router/canmatch)` | Interface that a class can implement to be a guard deciding if a `[Route](router/route)` can be matched. If all guards return `true`, navigation continues and the `[Router](router/router)` will use the `[Route](router/route)` during activation. If any guard returns `false`, the `[Route](router/route)` is skipped for matching and other `[Route](router/route)` configurations are processed instead. | | `[DefaultExport](router/defaultexport)` | An ES Module object with a default export of the given type. | | `[EventType](router/eventtype)` | Identifies the type of a router event. | | `[ExtraOptions](router/extraoptions)` | A set of configuration options for a router module, provided in the `forRoot()` method. | | `[InMemoryScrollingOptions](router/inmemoryscrollingoptions)` | Configuration options for the scrolling feature which can be used with `[withInMemoryScrolling](router/withinmemoryscrolling)` function. | | `[IsActiveMatchOptions](router/isactivematchoptions)` | A set of options which specify how to determine if a `[UrlTree](router/urltree)` is active, given the `[UrlTree](router/urltree)` for the current router state. | | `[Navigation](router/navigation)` | Information about a navigation operation. Retrieve the most recent navigation object with the [Router.getCurrentNavigation() method](router/router#getcurrentnavigation) . | | `[NavigationBehaviorOptions](router/navigationbehavioroptions)` | Options that modify the `[Router](router/router)` navigation strategy. Supply an object containing any of these properties to a `[Router](router/router)` navigation function to control how the navigation should be handled. | | `[NavigationCancellationCode](router/navigationcancellationcode)` | A code for the `[NavigationCancel](router/navigationcancel)` event of the `[Router](router/router)` to indicate the reason a navigation failed. | | `[NavigationExtras](router/navigationextras)` | Options that modify the `[Router](router/router)` navigation strategy. Supply an object containing any of these properties to a `[Router](router/router)` navigation function to control how the target URL should be constructed or interpreted. | | `[NavigationSkippedCode](router/navigationskippedcode)` | A code for the `[NavigationSkipped](router/navigationskipped)` event of the `[Router](router/router)` to indicate the reason a navigation was skipped. | | `[ParamMap](router/parammap)` | A map that provides access to the required and optional parameters specific to a route. The map supports retrieving a single value with `get()` or multiple values with `getAll()`. | | `[Resolve](router/resolve)` | Interface that classes can implement to be a data provider. A data provider class can be used with the router to resolve data during navigation. The interface defines a `resolve()` method that is invoked right after the `[ResolveStart](router/resolvestart)` router event. The router waits for the data to be resolved before the route is finally activated. | | `[Route](router/route)` | A configuration object that defines a single route. A set of routes are collected in a `[Routes](router/routes)` array to define a `[Router](router/router)` configuration. The router attempts to match segments of a given URL against each route, using the configuration options defined in this object. 
| | `[RouterConfigOptions](router/routerconfigoptions)` | Extra configuration options that can be used with the `[withRouterConfig](router/withrouterconfig)` function. | | `[RouterFeature](router/routerfeature)` | Helper type to represent a Router feature. | | `[RouterOutletContract](router/routeroutletcontract)` | An interface that defines the contract for developing a component outlet for the `[Router](router/router)`. | | `[UrlCreationOptions](router/urlcreationoptions)` | Options that modify the `[Router](router/router)` URL. Supply an object containing any of these properties to a `[Router](router/router)` navigation function to control how the target URL should be constructed. | ### Directives | | | | --- | --- | | `[RouterLink](router/routerlink)` | When applied to an element in a template, makes that element a link that initiates navigation to a route. Navigation opens one or more routed components in one or more `<[router-outlet](router/routeroutlet)>` locations on the page. | | `[RouterLinkActive](router/routerlinkactive)` | Tracks whether the linked route of an element is currently active, and allows you to specify one or more CSS classes to add to the element when the linked route is active. | | `[RouterLinkWithHref](router/routerlinkwithhref)` | When applied to an element in a template, makes that element a link that initiates navigation to a route. Navigation opens one or more routed components in one or more `<[router-outlet](router/routeroutlet)>` locations on the page. | | `[RouterOutlet](router/routeroutlet)` | Acts as a placeholder that Angular dynamically fills based on the current router state. | ### Types | | | | --- | --- | | `[CanActivateChildFn](router/canactivatechildfn)` | The signature of a function used as a `canActivateChild` guard on a `[Route](router/route)`. | | `[CanActivateFn](router/canactivatefn)` | The signature of a function used as a `canActivate` guard on a `[Route](router/route)`. | | `[CanDeactivateFn](router/candeactivatefn)` | The signature of a function used as a `canDeactivate` guard on a `[Route](router/route)`. | | `[CanLoadFn](router/canloadfn)` | **Deprecated:** Use `[Route.canMatch](router/route#canMatch)` and `[CanMatchFn](router/canmatchfn)` instead The signature of a function used as a `canLoad` guard on a `[Route](router/route)`. | | `[CanMatchFn](router/canmatchfn)` | The signature of a function used as a `[CanMatch](router/canmatch)` guard on a `[Route](router/route)`. | | `[Data](router/data)` | Represents static data associated with a particular route. | | `[DebugTracingFeature](router/debugtracingfeature)` | A type alias for providers returned by `[withDebugTracing](router/withdebugtracing)` for use with `[provideRouter](router/providerouter)`. | | `[DetachedRouteHandle](router/detachedroutehandle)` | Represents the detached route tree. | | `[DisabledInitialNavigationFeature](router/disabledinitialnavigationfeature)` | A type alias for providers returned by `[withDisabledInitialNavigation](router/withdisabledinitialnavigation)` for use with `[provideRouter](router/providerouter)`. | | `[EnabledBlockingInitialNavigationFeature](router/enabledblockinginitialnavigationfeature)` | A type alias for providers returned by `[withEnabledBlockingInitialNavigation](router/withenabledblockinginitialnavigation)` for use with `[provideRouter](router/providerouter)`. | | `[Event](router/event)` | Router events that allow you to track the lifecycle of the router. 
| | `[InMemoryScrollingFeature](router/inmemoryscrollingfeature)` | A type alias for providers returned by `[withInMemoryScrolling](router/withinmemoryscrolling)` for use with `[provideRouter](router/providerouter)`. | | `[InitialNavigation](router/initialnavigation)` | Allowed values in an `[ExtraOptions](router/extraoptions)` object that configure when the router performs the initial navigation operation. | | `[InitialNavigationFeature](router/initialnavigationfeature)` | A type alias for providers returned by `[withEnabledBlockingInitialNavigation](router/withenabledblockinginitialnavigation)` or `[withDisabledInitialNavigation](router/withdisabledinitialnavigation)` functions for use with `[provideRouter](router/providerouter)`. | | `[LoadChildren](router/loadchildren)` | A function that returns a set of routes to load. | | `[LoadChildrenCallback](router/loadchildrencallback)` | A function that is called to resolve a collection of lazy-loaded routes. Must be an arrow function of the following form: `() => import('...').then(mod => mod.MODULE)` or `() => import('...').then(mod => mod.ROUTES)` | | `[OnSameUrlNavigation](router/onsameurlnavigation)` | How to handle a navigation request to the current URL. One of: | | `[PRIMARY\_OUTLET](router/primary_outlet)` | The primary routing outlet. | | `[Params](router/params)` | A collection of matrix and query URL parameters. | | `[PreloadingFeature](router/preloadingfeature)` | A type alias that represents a feature which enables preloading in Router. The type is used to describe the return value of the `[withPreloading](router/withpreloading)` function. | | `[QueryParamsHandling](router/queryparamshandling)` | How to handle query parameters in a router link. One of:* `"merge"` : Merge new parameters with current parameters. * `"preserve"` : Preserve current parameters. * `""` : Replace current parameters with new parameters. This is the default behavior. | | `[ROUTER\_CONFIGURATION](router/router_configuration)` | A [DI token](../guide/glossary/index#di-token) for the router service. | | `[ROUTER\_INITIALIZER](router/router_initializer)` | A [DI token](../guide/glossary/index#di-token) for the router initializer that is called after the app is bootstrapped. | | `[ROUTES](router/routes)` | The [DI token](../guide/glossary/index#di-token) for a router configuration. | | `[ResolveData](router/resolvedata)` | Represents the resolved data associated with a particular route. | | `[ResolveFn](router/resolvefn)` | Function type definition for a data provider. | | `[RouterConfigurationFeature](router/routerconfigurationfeature)` | A type alias for providers returned by `[withRouterConfig](router/withrouterconfig)` for use with `[provideRouter](router/providerouter)`. | | `[RouterFeatures](router/routerfeatures)` | A type alias that represents all Router features available for use with `[provideRouter](router/providerouter)`. Features can be enabled by adding special functions to the `[provideRouter](router/providerouter)` call. See documentation for each symbol to find corresponding function name. See also `[provideRouter](router/providerouter)` documentation on how to use those functions. | | `[RouterHashLocationFeature](router/routerhashlocationfeature)` | A type alias for providers returned by `[withHashLocation](router/withhashlocation)` for use with `[provideRouter](router/providerouter)`. | | `[Routes](router/routes)` | Represents a route configuration for the Router service. 
An array of `[Route](router/route)` objects, used in `[Router.config](router/router#config)` and for nested route configurations in `[Route.children](router/route#children)`. | | `[RunGuardsAndResolvers](router/runguardsandresolvers)` | A policy for when to run guards and resolvers on a route. | | `[UrlMatchResult](router/urlmatchresult)` | Represents the result of matching URLs with a custom matching function. | | `[UrlMatcher](router/urlmatcher)` | A function for matching a route against URLs. Implement a custom URL matcher for `[Route.matcher](router/route#matcher)` when a combination of `path` and `pathMatch` is not expressive enough. Cannot be used together with `path` and `pathMatch`. |
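As a quick illustration of how the exports above are typically combined, the following is a minimal sketch of a standalone bootstrap that defines a `Routes` array, enables eager preloading with `withPreloading(PreloadAllModules)`, and places a routed view with `RouterOutlet`. The component names, the `admin` path, and the `./admin/routes` import are illustrative assumptions, not part of this reference.

```
// Minimal sketch (assumed standalone-bootstrap setup); component and route
// names below are illustrative placeholders.
import { Component } from '@angular/core';
import { bootstrapApplication } from '@angular/platform-browser';
import {
  PreloadAllModules,
  provideRouter,
  RouterLink,
  RouterOutlet,
  Routes,
  withPreloading,
} from '@angular/router';

@Component({
  standalone: true,
  selector: 'app-home',
  template: '<h2>Home</h2>',
})
export class HomeComponent {}

// A Routes array maps URL paths to components; the lazy route returns a set of
// child routes from a dynamic import (see LoadChildrenCallback above).
const routes: Routes = [
  { path: '', component: HomeComponent },
  {
    path: 'admin',
    loadChildren: () => import('./admin/routes').then(mod => mod.ADMIN_ROUTES),
  },
];

@Component({
  standalone: true,
  selector: 'app-root',
  imports: [RouterOutlet, RouterLink],
  template: `
    <a routerLink="/">Home</a>
    <a routerLink="/admin">Admin</a>
    <router-outlet></router-outlet>
  `,
})
export class AppComponent {}

bootstrapApplication(AppComponent, {
  providers: [provideRouter(routes, withPreloading(PreloadAllModules))],
});
```

In an NgModule-based application, the equivalent configuration is supplied by importing `RouterModule.forRoot(routes)` instead of calling `provideRouter`.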
angular @angular/platform-server @angular/platform-server ======================== `package` Supports delivery of Angular apps on a server, for use with [server-side rendering](../guide/glossary#server-side-rendering) (SSR). For more information, see [Server-side Rendering: An intro to Angular Universal](../guide/universal). Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/platform-server](platform-server#primary-entry-point-exports)` | Supports delivery of Angular apps on a server, for use with [server-side rendering](../guide/glossary#server-side-rendering) (SSR). | ### Secondary | | | | --- | --- | | `[@angular/platform-server/init](platform-server/init)` | Initializes the server environment for rendering an Angular application. | | `[@angular/platform-server/testing](platform-server/testing)` | Supplies a testing module for the Angular platform server subsystem. | Primary entry point exports --------------------------- ### NgModules | | | | --- | --- | | `[ServerModule](platform-server/servermodule)` | The ng module for the server. | | `[ServerTransferStateModule](platform-server/servertransferstatemodule)` | **Deprecated:** no longer needed, you can inject the `[TransferState](platform-browser/transferstate)` in an app without providing this module. NgModule to install on the server side while using the `[TransferState](platform-browser/transferstate)` to transfer state from server to client. | ### Classes | | | | --- | --- | | `[PlatformState](platform-server/platformstate)` | Representation of the current platform state. | ### Functions | | | | --- | --- | | `[renderApplication](platform-server/renderapplication)` | Bootstraps an instance of an Angular application and renders it to a string. | | `[renderModule](platform-server/rendermodule)` | Bootstraps an application using provided NgModule and serializes the page content to string. | | `[renderModuleFactory](platform-server/rendermodulefactory)` | **Deprecated:** This symbol is no longer necessary as of Angular v13. Use [`renderModule`](platform-server/rendermodule) API instead. Bootstraps an application using provided [`NgModuleFactory`](core/ngmodulefactory) and serializes the page content to string. | ### Structures | | | | --- | --- | | `[PlatformConfig](platform-server/platformconfig)` | Config object passed to initialize the platform. | ### Types | | | | --- | --- | | `[BEFORE\_APP\_SERIALIZED](platform-server/before_app_serialized)` | A function that will be executed when calling `[renderApplication](platform-server/renderapplication)`, `[renderModuleFactory](platform-server/rendermodulefactory)` or `[renderModule](platform-server/rendermodule)` just before current platform state is rendered to string. | | `[INITIAL\_CONFIG](platform-server/initial_config)` | The DI token for setting the initial config for the platform. | | `[platformDynamicServer](platform-server/platformdynamicserver)` | The server platform that supports the runtime compiler. | | `[platformServer](platform-server/platformserver)` | | angular @angular/forms @angular/forms ============== `package` Implements a set of directives and providers to communicate with native DOM elements when building forms to capture user input. Use this API to register directives, build form and data models, and provide validation to your forms. Validators can be synchronous or asynchronous depending on your use case. 
You can also extend the built-in functionality provided by forms in Angular by using the interfaces and tokens to create custom validators and input elements. Angular forms allow you to: * Capture the current value and validation status of a form. * Track and listen for changes to the form's data model. * Validate the correctness of user input. * Create custom validators and input elements. You can build forms in one of two ways: * *Reactive forms* use existing instances of a `[FormControl](forms/formcontrol)` or `[FormGroup](forms/formgroup)` to build a form model. This form model is synced with form input elements through directives to track and communicate changes back to the form model. Changes to the value and status of the controls are provided as observables. * *Template-driven forms* rely on directives such as `[NgModel](forms/ngmodel)` and `[NgModelGroup](forms/ngmodelgroup)` create the form model for you, so any changes to the form are communicated through the template. See also -------- * Find out more in the [Forms Overview](../guide/forms-overview). Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/forms](forms#primary-entry-point-exports)` | Implements a set of directives and providers to communicate with native DOM elements when building forms to capture user input. | Primary entry point exports --------------------------- ### NgModules | | | | --- | --- | | `[FormsModule](forms/formsmodule)` | Exports the required providers and directives for template-driven forms, making them available for import by NgModules that import this module. | | `[ReactiveFormsModule](forms/reactiveformsmodule)` | Exports the required infrastructure and directives for reactive forms, making them available for import by NgModules that import this module. | ### Classes | | | | --- | --- | | `[AbstractControl](forms/abstractcontrol)` | This is the base class for `[FormControl](forms/formcontrol)`, `[FormGroup](forms/formgroup)`, and `[FormArray](forms/formarray)`. | | `[AbstractControlDirective](forms/abstractcontroldirective)` | Base class for control directives. | | `[ControlContainer](forms/controlcontainer)` | A base class for directives that contain multiple registered instances of `[NgControl](forms/ngcontrol)`. Only used by the forms module. | | `[FormArray](forms/formarray)` | Tracks the value and validity state of an array of `[FormControl](forms/formcontrol)`, `[FormGroup](forms/formgroup)` or `[FormArray](forms/formarray)` instances. | | `[FormBuilder](forms/formbuilder)` | Creates an `[AbstractControl](forms/abstractcontrol)` from a user-specified configuration. | | `[FormGroup](forms/formgroup)` | Tracks the value and validity state of a group of `[FormControl](forms/formcontrol)` instances. | | `[FormRecord](forms/formrecord)` | Tracks the value and validity state of a collection of `[FormControl](forms/formcontrol)` instances, each of which has the same value type. | | `[NgControl](forms/ngcontrol)` | A base class that all `[FormControl](forms/formcontrol)`-based directives extend. It binds a `[FormControl](forms/formcontrol)` object to a DOM element. | | `[NonNullableFormBuilder](forms/nonnullableformbuilder)` | `[NonNullableFormBuilder](forms/nonnullableformbuilder)` is similar to [`FormBuilder`](forms/formbuilder), but automatically constructed [`FormControl`](forms/formcontrol) elements have `{nonNullable: true}` and are non-nullable. 
| | `[UntypedFormBuilder](forms/untypedformbuilder)` | UntypedFormBuilder is the same as @see FormBuilder, but it provides untyped controls. | | `[Validators](forms/validators)` | Provides a set of built-in validators that can be used by form controls. | ### Structures | | | | --- | --- | | `[AbstractControlOptions](forms/abstractcontroloptions)` | Interface for options provided to an `[AbstractControl](forms/abstractcontrol)`. | | `[AsyncValidator](forms/asyncvalidator)` | An interface implemented by classes that perform asynchronous validation. | | `[AsyncValidatorFn](forms/asyncvalidatorfn)` | A function that receives a control and returns a Promise or observable that emits validation errors if present, otherwise null. | | `[ControlValueAccessor](forms/controlvalueaccessor)` | Defines an interface that acts as a bridge between the Angular forms API and a native element in the DOM. | | `[Form](forms/form)` | An interface implemented by `[FormGroupDirective](forms/formgroupdirective)` and `[NgForm](forms/ngform)` directives. | | `[FormControlOptions](forms/formcontroloptions)` | Interface for options provided to a `[FormControl](forms/formcontrol)`. | | `[FormControlState](forms/formcontrolstate)` | FormControlState is a boxed form value. It is an object with a `value` key and a `disabled` key. | | `[Validator](forms/validator)` | An interface implemented by classes that perform synchronous validation. | | `[ValidatorFn](forms/validatorfn)` | A function that receives a control and synchronously returns a map of validation errors if present, otherwise null. | ### Directives | | | | --- | --- | | `[AbstractFormGroupDirective](forms/abstractformgroupdirective)` | A base class for code shared between the `[NgModelGroup](forms/ngmodelgroup)` and `[FormGroupName](forms/formgroupname)` directives. | | `[CheckboxControlValueAccessor](forms/checkboxcontrolvalueaccessor)` | A `[ControlValueAccessor](forms/controlvalueaccessor)` for writing a value and listening to changes on a checkbox input element. | | `[CheckboxRequiredValidator](forms/checkboxrequiredvalidator)` | A Directive that adds the `required` validator to checkbox controls marked with the `required` attribute. The directive is provided with the `[NG\_VALIDATORS](forms/ng_validators)` multi-provider list. | | `[DefaultValueAccessor](forms/defaultvalueaccessor)` | The default `[ControlValueAccessor](forms/controlvalueaccessor)` for writing a value and listening to changes on input elements. The accessor is used by the `[FormControlDirective](forms/formcontroldirective)`, `[FormControlName](forms/formcontrolname)`, and `[NgModel](forms/ngmodel)` directives. | | `[EmailValidator](forms/emailvalidator)` | A directive that adds the `[email](forms/emailvalidator)` validator to controls marked with the `[email](forms/emailvalidator)` attribute. The directive is provided with the `[NG\_VALIDATORS](forms/ng_validators)` multi-provider list. | | `[FormArrayName](forms/formarrayname)` | Syncs a nested `[FormArray](forms/formarray)` to a DOM element. | | `[FormControlDirective](forms/formcontroldirective)` | Synchronizes a standalone `[FormControl](forms/formcontrol)` instance to a form control element. | | `[FormControlName](forms/formcontrolname)` | Syncs a `[FormControl](forms/formcontrol)` in an existing `[FormGroup](forms/formgroup)` to a form control element by name. | | `[FormGroupDirective](forms/formgroupdirective)` | Binds an existing `[FormGroup](forms/formgroup)` or `[FormRecord](forms/formrecord)` to a DOM element. 
| | `[FormGroupName](forms/formgroupname)` | Syncs a nested `[FormGroup](forms/formgroup)` or `[FormRecord](forms/formrecord)` to a DOM element. | | `[MaxLengthValidator](forms/maxlengthvalidator)` | A directive that adds max length validation to controls marked with the `[maxlength](forms/maxlengthvalidator)` attribute. The directive is provided with the `[NG\_VALIDATORS](forms/ng_validators)` multi-provider list. | | `[MaxValidator](forms/maxvalidator)` | A directive which installs the [`MaxValidator`](forms/maxvalidator) for any `[formControlName](forms/formcontrolname)`, `formControl`, or control with `[ngModel](forms/ngmodel)` that also has a `[max](forms/maxvalidator)` attribute. | | `[MinLengthValidator](forms/minlengthvalidator)` | A directive that adds minimum length validation to controls marked with the `[minlength](forms/minlengthvalidator)` attribute. The directive is provided with the `[NG\_VALIDATORS](forms/ng_validators)` multi-provider list. | | `[MinValidator](forms/minvalidator)` | A directive which installs the [`MinValidator`](forms/minvalidator) for any `[formControlName](forms/formcontrolname)`, `formControl`, or control with `[ngModel](forms/ngmodel)` that also has a `[min](forms/minvalidator)` attribute. | | `[NgControlStatus](forms/ngcontrolstatus)` | Directive automatically applied to Angular form controls that sets CSS classes based on control status. | | `[NgControlStatusGroup](forms/ngcontrolstatusgroup)` | Directive automatically applied to Angular form groups that sets CSS classes based on control status (valid/invalid/dirty/etc). On groups, this includes the additional class ng-submitted. | | `[NgForm](forms/ngform)` | Creates a top-level `[FormGroup](forms/formgroup)` instance and binds it to a form to track aggregate form value and validation status. | | `[NgModel](forms/ngmodel)` | Creates a `[FormControl](forms/formcontrol)` instance from a domain model and binds it to a form control element. | | `[NgModelGroup](forms/ngmodelgroup)` | Creates and binds a `[FormGroup](forms/formgroup)` instance to a DOM element. | | `[NgSelectOption](forms/ngselectoption)` | Marks `<option>` as dynamic, so Angular can be notified when options change. | | `[NumberValueAccessor](forms/numbervalueaccessor)` | The `[ControlValueAccessor](forms/controlvalueaccessor)` for writing a number value and listening to number input changes. The value accessor is used by the `[FormControlDirective](forms/formcontroldirective)`, `[FormControlName](forms/formcontrolname)`, and `[NgModel](forms/ngmodel)` directives. | | `[PatternValidator](forms/patternvalidator)` | A directive that adds regex pattern validation to controls marked with the `[pattern](forms/patternvalidator)` attribute. The regex must match the entire control value. The directive is provided with the `[NG\_VALIDATORS](forms/ng_validators)` multi-provider list. | | `[RadioControlValueAccessor](forms/radiocontrolvalueaccessor)` | The `[ControlValueAccessor](forms/controlvalueaccessor)` for writing radio control values and listening to radio control changes. The value accessor is used by the `[FormControlDirective](forms/formcontroldirective)`, `[FormControlName](forms/formcontrolname)`, and `[NgModel](forms/ngmodel)` directives. | | `[RangeValueAccessor](forms/rangevalueaccessor)` | The `[ControlValueAccessor](forms/controlvalueaccessor)` for writing a range value and listening to range input changes. 
The value accessor is used by the `[FormControlDirective](forms/formcontroldirective)`, `[FormControlName](forms/formcontrolname)`, and `[NgModel](forms/ngmodel)` directives. | | `[RequiredValidator](forms/requiredvalidator)` | A directive that adds the `required` validator to any controls marked with the `required` attribute. The directive is provided with the `[NG\_VALIDATORS](forms/ng_validators)` multi-provider list. | | `[SelectControlValueAccessor](forms/selectcontrolvalueaccessor)` | The `[ControlValueAccessor](forms/controlvalueaccessor)` for writing select control values and listening to select control changes. The value accessor is used by the `[FormControlDirective](forms/formcontroldirective)`, `[FormControlName](forms/formcontrolname)`, and `[NgModel](forms/ngmodel)` directives. | | `[SelectMultipleControlValueAccessor](forms/selectmultiplecontrolvalueaccessor)` | The `[ControlValueAccessor](forms/controlvalueaccessor)` for writing multi-select control values and listening to multi-select control changes. The value accessor is used by the `[FormControlDirective](forms/formcontroldirective)`, `[FormControlName](forms/formcontrolname)`, and `[NgModel](forms/ngmodel)` directives. | ### Types | | | | --- | --- | | `[COMPOSITION\_BUFFER\_MODE](forms/composition_buffer_mode)` | Provide this token to control if form directives buffer IME input until the "compositionend" event occurs. | | `[ControlConfig](forms/controlconfig)` | ControlConfig is a tuple containing a value of type T, plus optional validators and async validators. | | `[FormControlStatus](forms/formcontrolstatus)` | A form can have several different statuses. Each possible status is returned as a string literal. | | `[NG\_ASYNC\_VALIDATORS](forms/ng_async_validators)` | An `[InjectionToken](core/injectiontoken)` for registering additional asynchronous validators used with `[AbstractControl](forms/abstractcontrol)`s. | | `[NG\_VALIDATORS](forms/ng_validators)` | An `[InjectionToken](core/injectiontoken)` for registering additional synchronous validators used with `[AbstractControl](forms/abstractcontrol)`s. | | `[NG\_VALUE\_ACCESSOR](forms/ng_value_accessor)` | Used to provide a `[ControlValueAccessor](forms/controlvalueaccessor)` for form controls. | | `[SetDisabledStateOption](forms/setdisabledstateoption)` | The type for CALL\_SET\_DISABLED\_STATE. If `always`, then ControlValueAccessor will always call `setDisabledState` when attached, which is the most correct behavior. Otherwise, it will only be called when disabled, which is the legacy behavior for compatibility. | | `[UntypedFormArray](forms/untypedformarray)` | UntypedFormArray is a non-strongly-typed version of @see FormArray, which permits heterogenous controls. | | `[UntypedFormControl](forms/untypedformcontrol)` | UntypedFormControl is a non-strongly-typed version of @see FormControl. | | `[UntypedFormGroup](forms/untypedformgroup)` | UntypedFormGroup is a non-strongly-typed version of @see FormGroup. | | `[ValidationErrors](forms/validationerrors)` | Defines the map of errors returned from failed validation checks. 
| | `[isFormArray](forms/isformarray)` | Asserts that the given control is an instance of `[FormArray](forms/formarray)` | | `[isFormControl](forms/isformcontrol)` | Asserts that the given control is an instance of `[FormControl](forms/formcontrol)` | | `[isFormGroup](forms/isformgroup)` | Asserts that the given control is an instance of `[FormGroup](forms/formgroup)` | | `[isFormRecord](forms/isformrecord)` | Asserts that the given control is an instance of `[FormRecord](forms/formrecord)` | angular @angular/service-worker @angular/service-worker ======================= `package` Implements a service worker for Angular apps. Adding a service worker to an Angular app is one of the steps for turning it into a Progressive Web App (also known as a PWA). At its simplest, a service worker is a script that runs in the web browser and manages caching for an application. Service workers function as a network proxy. They intercept all outgoing HTTP requests made by the application and can choose how to respond to them. To set up the Angular service worker in your project, use the CLI `add` command. ``` ng add @angular/pwa ``` The command configures your app to use service workers by adding the service-worker package and generating the necessary support files. For more usage information, see the [Service Workers](../guide/service-worker-intro) guide. Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/service-worker](service-worker#primary-entry-point-exports)` | Implements a service worker for Angular apps. Adding a service worker to an Angular app is one of the steps for turning it into a Progressive Web App (also known as a PWA). | Primary entry point exports --------------------------- ### NgModules | | | | --- | --- | | `[ServiceWorkerModule](service-worker/serviceworkermodule)` | | ### Classes | | | | --- | --- | | `[SwPush](service-worker/swpush)` | Subscribe and listen to [Web Push Notifications](https://developer.mozilla.org/en-US/docs/Web/API/Push_API/Best_Practices) through Angular Service Worker. | | `[SwRegistrationOptions](service-worker/swregistrationoptions)` | Token that can be used to provide options for `[ServiceWorkerModule](service-worker/serviceworkermodule)` outside of `[ServiceWorkerModule.register()](service-worker/serviceworkermodule#register)`. | | `[SwUpdate](service-worker/swupdate)` | Subscribe to update notifications from the Service Worker, trigger update checks, and forcibly activate updates. | ### Structures | | | | --- | --- | | `[NoNewVersionDetectedEvent](service-worker/nonewversiondetectedevent)` | An event emitted when the service worker has checked the version of the app on the server and it didn't find a new version that it doesn't have already downloaded. | | `[UnrecoverableStateEvent](service-worker/unrecoverablestateevent)` | An event emitted when the version of the app used by the service worker to serve this client is in a broken state that cannot be recovered from and a full page reload is required. | | `[UpdateActivatedEvent](service-worker/updateactivatedevent)` | **Deprecated:** This event is only emitted by the deprecated [`SwUpdate`](service-worker/swupdate#activated). Use the return value of [`SwUpdate`](service-worker/swupdate#activateUpdate) instead. An event emitted when a new version of the app has been downloaded and activated. | | `[UpdateAvailableEvent](service-worker/updateavailableevent)` | **Deprecated:** This event is only emitted by the deprecated [`SwUpdate`](service-worker/swupdate#available). 
Use the [`VersionReadyEvent`](service-worker/versionreadyevent) instead, which is emitted by [`SwUpdate`](service-worker/swupdate#versionUpdates). See [`SwUpdate`](service-worker/swupdate#available) docs for an example. An event emitted when a new version of the app is available. | | `[VersionDetectedEvent](service-worker/versiondetectedevent)` | An event emitted when the service worker has detected a new version of the app on the server and is about to start downloading it. | | `[VersionInstallationFailedEvent](service-worker/versioninstallationfailedevent)` | An event emitted when the installation of a new version failed. It may be used for logging/monitoring purposes. | | `[VersionReadyEvent](service-worker/versionreadyevent)` | An event emitted when a new version of the app is available. | ### Types | | | | --- | --- | | `[VersionEvent](service-worker/versionevent)` | A union of all event types that can be emitted by [SwUpdate#versionUpdates](service-worker/swupdate#versionUpdates). |
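As a short usage sketch of the symbols above, the example below registers the worker with `ServiceWorkerModule.register()` and prompts the user to reload when `SwUpdate.versionUpdates` emits a `VersionReadyEvent`. The `AppModule`/`AppComponent` names are placeholders; the `ngsw-worker.js` path and the `registerWhenStable:30000` strategy reflect the typical setup produced by `ng add @angular/pwa`.

```
// Minimal sketch: register the Angular service worker and react to update events.
import { Component, Injectable, isDevMode, NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { ServiceWorkerModule, SwUpdate, VersionReadyEvent } from '@angular/service-worker';
import { filter } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class UpdateNotifierService {
  constructor(updates: SwUpdate) {
    // versionUpdates also emits VersionDetectedEvent, VersionInstallationFailedEvent
    // and NoNewVersionDetectedEvent; here only VersionReadyEvent is of interest.
    updates.versionUpdates
      .pipe(filter((event): event is VersionReadyEvent => event.type === 'VERSION_READY'))
      .subscribe(() => {
        if (confirm('A new version is available. Reload now?')) {
          document.location.reload();
        }
      });
  }
}

@Component({ selector: 'app-root', template: '<h1>PWA demo</h1>' })
export class AppComponent {
  // Injecting the service eagerly sets up the update subscription at startup.
  constructor(readonly updateNotifier: UpdateNotifierService) {}
}

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    // ngsw-worker.js is emitted by the Angular build when the service worker is enabled.
    ServiceWorkerModule.register('ngsw-worker.js', {
      enabled: !isDevMode(),
      registrationStrategy: 'registerWhenStable:30000',
    }),
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```

Applications generated with `ng add @angular/pwa` get the registration call added automatically; only the update-notification service is extra here.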
angular @angular/core @angular/core ============= `package` Implements Angular's core functionality, low-level services, and utilities. * Defines the class infrastructure for components, view hierarchies, change detection, rendering, and event handling. * Defines the decorators that supply metadata and context for Angular constructs. * Defines infrastructure for dependency injection (DI), internationalization (i18n), and various testing and debugging facilities. Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/core](core#primary-entry-point-exports)` | Implements Angular's core functionality, low-level services, and utilities. | ### Secondary | | | | --- | --- | | `[@angular/core/global](core/global)` | Exposes a set of functions in the global namespace which are useful for debugging the current state of your application. These functions are exposed via the global `ng` "namespace" variable automatically when you import from `@angular/core` and run your application in development mode. These functions are not exposed when the application runs in a production mode. | | `[@angular/core/testing](core/testing)` | Provides infrastructure for testing Angular core functionality. | Primary entry point exports --------------------------- ### NgModules | | | | --- | --- | | `[ApplicationModule](core/applicationmodule)` | Re-exported by `[BrowserModule](platform-browser/browsermodule)`, which is included automatically in the root `AppModule` when you create a new app with the CLI `new` command. Eagerly injects `[ApplicationRef](core/applicationref)` to instantiate it. | ### Classes | | | | --- | --- | | `[ApplicationInitStatus](core/applicationinitstatus)` | A class that reflects the state of running [`APP_INITIALIZER`](core/app_initializer) functions. | | `[ApplicationRef](core/applicationref)` | A reference to an Angular application running on a page. | | `[ChangeDetectorRef](core/changedetectorref)` | Base class that provides change detection functionality. A change-detection tree collects all views that are to be checked for changes. Use the methods to add and remove views from the tree, initiate change-detection, and explicitly mark views as *dirty*, meaning that they have changed and need to be re-rendered. | | `[Compiler](core/compiler)` | **Deprecated:** Ivy JIT mode doesn't require accessing this symbol. See [JIT API changes due to ViewEngine deprecation](../guide/deprecations#jit-api-changes) for additional context. Low-level service for running the angular compiler during runtime to create [`ComponentFactory`](core/componentfactory)s, which can later be used to create and render a Component instance. | | `[CompilerFactory](core/compilerfactory)` | **Deprecated:** Ivy JIT mode doesn't require accessing this symbol. See [JIT API changes due to ViewEngine deprecation](../guide/deprecations#jit-api-changes) for additional context. A factory for creating a Compiler | | `[ComponentFactory](core/componentfactory)` | **Deprecated:** Angular no longer requires Component factories. Please use other APIs where Component class can be used directly. Base class for a factory that can create a component dynamically. Instantiate a factory for a given type of component with `resolveComponentFactory()`. Use the resulting `ComponentFactory.create()` method to create a component of that type. | | `[ComponentFactoryResolver](core/componentfactoryresolver)` | **Deprecated:** Angular no longer requires Component factories. Please use other APIs where Component class can be used directly. 
A simple registry that maps `Components` to generated `[ComponentFactory](core/componentfactory)` classes that can be used to create instances of components. Use to obtain the factory for a given component type, then use the factory's `create()` method to create a component of that type. | | `[ComponentRef](core/componentref)` | Represents a component created by a `[ComponentFactory](core/componentfactory)`. Provides access to the component instance and related objects, and provides the means of destroying the instance. | | `[DebugElement](core/debugelement)` | | | `[DebugEventListener](core/debugeventlistener)` | | | `[DebugNode](core/debugnode)` | | | `[DefaultIterableDiffer](core/defaultiterablediffer)` | **Deprecated:** v4.0.0 - Should not be part of public API. | | `[ElementRef](core/elementref)` | A wrapper around a native element inside of a View. | | `[EmbeddedViewRef](core/embeddedviewref)` | Represents an Angular [view](../guide/glossary#view) in a view container. An [embedded view](../guide/glossary#view-tree) can be referenced from a component other than the hosting component whose template defines it, or it can be defined independently by a `[TemplateRef](core/templateref)`. | | `[EnvironmentInjector](core/environmentinjector)` | An `[Injector](core/injector)` that's part of the environment injector hierarchy, which exists outside of the component tree. | | `[ErrorHandler](core/errorhandler)` | Provides a hook for centralized exception handling. | | `[EventEmitter](core/eventemitter)` | Use in components with the `@[Output](core/output)` directive to emit custom events synchronously or asynchronously, and register handlers for those events by subscribing to an instance. | | `[InjectionToken](core/injectiontoken)` | Creates a token that can be used in a DI Provider. | | `[Injector](core/injector)` | Concrete injectors implement this interface. Injectors are configured with [providers](../guide/glossary#provider) that associate dependencies of various types with [injection tokens](../guide/glossary#di-token). | | `[IterableDiffers](core/iterablediffers)` | A repository of different iterable diffing strategies used by NgFor, NgClass, and others. | | `[KeyValueDiffers](core/keyvaluediffers)` | A repository of different Map diffing strategies used by NgClass, NgStyle, and others. | | `[ModuleWithComponentFactories](core/modulewithcomponentfactories)` | **Deprecated:** Ivy JIT mode doesn't require accessing this symbol. See [JIT API changes due to ViewEngine deprecation](../guide/deprecations#jit-api-changes) for additional context. Combination of NgModuleFactory and ComponentFactories. | | `[NgModuleFactory](core/ngmodulefactory)` | **Deprecated:** This class was mostly used as a part of ViewEngine-based JIT API and is no longer needed in Ivy JIT mode. See [JIT API changes due to ViewEngine deprecation](../guide/deprecations#jit-api-changes) for additional context. Angular provides APIs that accept NgModule classes directly (such as [PlatformRef.bootstrapModule](core/platformref#bootstrapModule) and [createNgModule](core/createngmodule)), consider switching to those APIs instead of using factory-based ones. | | `[NgModuleRef](core/ngmoduleref)` | Represents an instance of an `[NgModule](core/ngmodule)` created by an `[NgModuleFactory](core/ngmodulefactory)`. Provides access to the `[NgModule](core/ngmodule)` instance and related objects. | | `[NgProbeToken](core/ngprobetoken)` | A token for third-party components that can register themselves with NgProbe. 
| | `[NgZone](core/ngzone)` | An injectable service for executing work inside or outside of the Angular zone. | | `[PlatformRef](core/platformref)` | The Angular platform is the entry point for Angular on a web page. Each page has exactly one platform. Services (such as reflection) which are common to every Angular application running on the page are bound in its scope. A page's platform is initialized implicitly when a platform is created using a platform factory such as `PlatformBrowser`, or explicitly by calling the `[createPlatform](core/createplatform)()` function. | | `[Query](core/query)` | Base class for query metadata. | | `[QueryList](core/querylist)` | An unmodifiable list of items that Angular keeps up to date when the state of the application changes. | | `[ReflectiveInjector](core/reflectiveinjector)` | **Deprecated:** from v5 - slow and brings in a lot of code, Use `Injector.create` instead. A ReflectiveDependency injection container used for instantiating objects and resolving dependencies. | | `[ReflectiveKey](core/reflectivekey)` | **Deprecated:** No replacement A unique object used for retrieving items from the [`ReflectiveInjector`](core/reflectiveinjector). | | `[Renderer2](core/renderer2)` | Extend this base class to implement custom rendering. By default, Angular renders a template into DOM. You can use custom rendering to intercept rendering calls, or to render to something other than DOM. | | `[RendererFactory2](core/rendererfactory2)` | Creates and initializes a custom renderer that implements the `[Renderer2](core/renderer2)` base class. | | `[ResolvedReflectiveFactory](core/resolvedreflectivefactory)` | An internal resolved representation of a factory function created by resolving `[Provider](core/provider)`. | | `[Sanitizer](core/sanitizer)` | Sanitizer is used by the views to sanitize potentially dangerous values. | | `[SimpleChange](core/simplechange)` | Represents a basic change from a previous to a new value for a single property on a directive instance. Passed as a value in a [`SimpleChanges`](core/simplechanges) object to the `ngOnChanges` hook. | | `[TemplateRef](core/templateref)` | Represents an embedded template that can be used to instantiate embedded views. To instantiate embedded views based on a template, use the `[ViewContainerRef](core/viewcontainerref)` method `createEmbeddedView()`. | | `[Testability](core/testability)` | The Testability service provides testing hooks that can be accessed from the browser. | | `[TestabilityRegistry](core/testabilityregistry)` | A global registry of [`Testability`](core/testability) instances for specific elements. | | `[Type](core/type)` | Represents a type that a Component or other object is instances of. | | `[Version](core/version)` | Represents the version of Angular | | `[ViewContainerRef](core/viewcontainerref)` | Represents a container where one or more views can be attached to a component. | | `[ViewRef](core/viewref)` | Represents an Angular [view](../guide/glossary#view "Definition"). | ### Decorators | | | | --- | --- | | `[Attribute](core/attribute)` | Parameter decorator for a directive constructor that designates a host-element attribute whose value is injected as a constant string literal. | | `[Component](core/component)` | Decorator that marks a class as an Angular component and provides configuration metadata that determines how the component should be processed, instantiated, and used at runtime. | | `[ContentChild](core/contentchild)` | Property decorator that configures a content query. 
| | `[ContentChildren](core/contentchildren)` | Property decorator that configures a content query. | | `[Directive](core/directive)` | Decorator that marks a class as an Angular directive. You can define your own directives to attach custom behavior to elements in the DOM. | | `[Host](core/host)` | Parameter decorator on a view-provider parameter of a class constructor that tells the DI framework to resolve the view by checking injectors of child elements, and stop when reaching the host element of the current component. | | `[HostBinding](core/hostbinding)` | Decorator that marks a DOM property as a host-binding property and supplies configuration metadata. Angular automatically checks host property bindings during change detection, and if a binding changes it updates the host element of the directive. | | `[HostListener](core/hostlistener)` | Decorator that declares a DOM event to listen for, and provides a handler method to run when that event occurs. | | `[Inject](core/inject)` | Parameter decorator on a dependency parameter of a class constructor that specifies a custom provider of the dependency. | | `[Injectable](core/injectable)` | Decorator that marks a class as available to be provided and injected as a dependency. | | `[Input](core/input)` | Decorator that marks a class field as an input property and supplies configuration metadata. The input property is bound to a DOM property in the template. During change detection, Angular automatically updates the data property with the DOM property's value. | | `[NgModule](core/ngmodule)` | Decorator that marks a class as an NgModule and supplies configuration metadata. | | `[Optional](core/optional)` | Parameter decorator to be used on constructor parameters, which marks the parameter as being an optional dependency. The DI framework provides `null` if the dependency is not found. | | `[Output](core/output)` | Decorator that marks a class field as an output property and supplies configuration metadata. The DOM property bound to the output property is automatically updated during change detection. | | `[Pipe](core/pipe)` | Decorator that marks a class as pipe and supplies configuration metadata. | | `[Self](core/self)` | Parameter decorator to be used on constructor parameters, which tells the DI framework to start dependency resolution from the local injector. | | `[SkipSelf](core/skipself)` | Parameter decorator to be used on constructor parameters, which tells the DI framework to start dependency resolution from the parent injector. Resolution works upward through the injector hierarchy, so the local injector is not checked for a provider. | | `[ViewChild](core/viewchild)` | Property decorator that configures a view query. The change detector looks for the first element or the directive matching the selector in the view DOM. If the view DOM changes, and a new child matches the selector, the property is updated. | | `[ViewChildren](core/viewchildren)` | Property decorator that configures a view query. | ### Functions | | | | --- | --- | | `[asNativeElements](core/asnativeelements)` | | | `[assertPlatform](core/assertplatform)` | Checks that there is currently a platform that contains the given token as a provider. | | `[createComponent](core/createcomponent)` | Creates a `[ComponentRef](core/componentref)` instance based on provided component type and a set of options. | | `[createEnvironmentInjector](core/createenvironmentinjector)` | Create a new environment injector. 
| | `[createNgModule](core/createngmodule)` | Returns a new NgModuleRef instance based on the NgModule class and parent injector provided. | | `[createPlatform](core/createplatform)` | Creates a platform. Platforms must be created on launch using this function. | | `[createPlatformFactory](core/createplatformfactory)` | Creates a factory for a platform. Can be used to provide or override `Providers` specific to your application's runtime needs, such as `[PLATFORM\_INITIALIZER](core/platform_initializer)` and `[PLATFORM\_ID](core/platform_id)`. | | `[destroyPlatform](core/destroyplatform)` | Destroys the current Angular platform and all Angular applications on the page. Destroys all modules and listeners registered with the platform. | | `[enableProdMode](core/enableprodmode)` | Disables Angular's development mode, which turns off assertions and other checks within the framework. | | `[forwardRef](core/forwardref)` | Allows referring to references which are not yet defined. | | `[getDebugNode](core/getdebugnode)` | | | `[getModuleFactory](core/getmodulefactory)` | **Deprecated:** Use `[getNgModuleById](core/getngmodulebyid)` instead. Returns the NgModuleFactory with the given id (specified using [@NgModule.id field](core/ngmodule#id)), if it exists and has been loaded. Factories for NgModules that do not specify an `id` cannot be retrieved. Throws if an NgModule cannot be found. | | `[getNgModuleById](core/getngmodulebyid)` | Returns the NgModule class with the given id (specified using [@NgModule.id field](core/ngmodule#id)), if it exists and has been loaded. Classes for NgModules that do not specify an `id` cannot be retrieved. Throws if an NgModule cannot be found. | | `[getPlatform](core/getplatform)` | Returns the current platform. | | `[importProvidersFrom](core/importprovidersfrom)` | Collects providers from all NgModules and standalone components, including transitively imported ones. | | `[inject](core/inject)` | Injects a token from the currently active injector. `inject` is only supported during instantiation of a dependency by the DI system. It can be used during:* Construction (via the `constructor`) of a class being instantiated by the DI system, such as an `@[Injectable](core/injectable)` or `@[Component](core/component)`. * In the initializer for fields of such classes. * In the factory function specified for `useFactory` of a `[Provider](core/provider)` or an `@[Injectable](core/injectable)`. * In the `factory` function specified for an `[InjectionToken](core/injectiontoken)`. (A brief usage sketch appears after these reference tables.) | | `[isDevMode](core/isdevmode)` | Returns whether Angular is in development mode. | | `[isStandalone](core/isstandalone)` | Checks whether a given Component, Directive or Pipe is marked as standalone. This will return false if passed anything other than a Component, Directive, or Pipe class. See this guide for additional information: [https://angular.io/guide/standalone-components](../guide/standalone-components) | | `[makeEnvironmentProviders](core/makeenvironmentproviders)` | Wraps an array of `[Provider](core/provider)`s into `[EnvironmentProviders](core/environmentproviders)`, preventing them from being accidentally referenced in `@[Component](core/component)` in a component injector. | | `[reflectComponentType](core/reflectcomponenttype)` | Creates an object that allows retrieval of component metadata. | | `[resolveForwardRef](core/resolveforwardref)` | Lazily retrieves the reference value from a forwardRef. 
| | `[setTestabilityGetter](core/settestabilitygetter)` | Set the [`GetTestability`](core/gettestability) implementation used by the Angular testing framework. | ### Structures | | | | --- | --- | | `[AbstractType](core/abstracttype)` | Represents an abstract class `T`, if applied to a concrete class it would stop being instantiable. | | `[AfterContentChecked](core/aftercontentchecked)` | A lifecycle hook that is called after the default change detector has completed checking all content of a directive. | | `[AfterContentInit](core/aftercontentinit)` | A lifecycle hook that is called after Angular has fully initialized all content of a directive. Define an `ngAfterContentInit()` method to handle any additional initialization tasks. | | `[AfterViewChecked](core/afterviewchecked)` | A lifecycle hook that is called after the default change detector has completed checking a component's view for changes. | | `[AfterViewInit](core/afterviewinit)` | A lifecycle hook that is called after Angular has fully initialized a component's view. Define an `ngAfterViewInit()` method to handle any additional initialization tasks. | | `[BootstrapOptions](core/bootstrapoptions)` | Provides additional options to the bootstrapping process. | | `[ChangeDetectionStrategy](core/changedetectionstrategy)` | The strategy that the default change detector uses to detect changes. When set, takes effect the next time change detection is triggered. | | `[ClassProvider](core/classprovider)` | Configures the `[Injector](core/injector)` to return an instance of `useClass` for a token. | | `[ClassSansProvider](core/classsansprovider)` | Configures the `[Injector](core/injector)` to return a value by invoking a `useClass` function. Base for `[ClassProvider](core/classprovider)` decorator. | | `[ComponentMirror](core/componentmirror)` | An interface that describes the subset of component metadata that can be retrieved using the `[reflectComponentType](core/reflectcomponenttype)` function. | | `[ConstructorProvider](core/constructorprovider)` | Configures the `[Injector](core/injector)` to return an instance of a token. | | `[ConstructorSansProvider](core/constructorsansprovider)` | Configures the `[Injector](core/injector)` to return an instance of a token. | | `[DoBootstrap](core/dobootstrap)` | Hook for manual bootstrapping of the application instead of using `bootstrap` array in @NgModule annotation. This hook is invoked only when the `bootstrap` array is empty or not provided. | | `[DoCheck](core/docheck)` | A lifecycle hook that invokes a custom change-detection function for a directive, in addition to the check performed by the default change-detector. | | `[ExistingProvider](core/existingprovider)` | Configures the `[Injector](core/injector)` to return a value of another `useExisting` token. | | `[ExistingSansProvider](core/existingsansprovider)` | Configures the `[Injector](core/injector)` to return a value of another `useExisting` token. | | `[FactoryProvider](core/factoryprovider)` | Configures the `[Injector](core/injector)` to return a value by invoking a `useFactory` function. | | `[FactorySansProvider](core/factorysansprovider)` | Configures the `[Injector](core/injector)` to return a value by invoking a `useFactory` function. | | `[ForwardRefFn](core/forwardreffn)` | An interface that a function passed into [`forwardRef`](core/forwardref) has to implement. 
| | `[GetTestability](core/gettestability)` | Adapter interface for retrieving the `[Testability](core/testability)` service associated for a particular context. | | `[InjectFlags](core/injectflags)` | **Deprecated:** use an options object for `inject` instead. Injection flags for DI. | | `[InjectOptions](core/injectoptions)` | Type of the options argument to `inject`. | | `[InjectableType](core/injectabletype)` | A `[Type](core/type)` which has a `ɵprov: ɵɵInjectableDeclaration` static field. | | `[InjectorType](core/injectortype)` | A type which has an `InjectorDef` static field. | | `[IterableChangeRecord](core/iterablechangerecord)` | Record representing the item change information. | | `[IterableChanges](core/iterablechanges)` | An object describing the changes in the `Iterable` collection since last time `[IterableDiffer](core/iterablediffer)#diff()` was invoked. | | `[IterableDiffer](core/iterablediffer)` | A strategy for tracking changes over time to an iterable. Used by [`NgForOf`](common/ngforof) to respond to changes in an iterable by effecting equivalent changes in the DOM. | | `[IterableDifferFactory](core/iterabledifferfactory)` | Provides a factory for [`IterableDiffer`](core/iterablediffer). | | `[KeyValueChangeRecord](core/keyvaluechangerecord)` | Record representing the item change information. | | `[KeyValueChanges](core/keyvaluechanges)` | An object describing the changes in the `Map` or `{[k:string]: string}` since last time `[KeyValueDiffer](core/keyvaluediffer)#diff()` was invoked. | | `[KeyValueDiffer](core/keyvaluediffer)` | A differ that tracks changes made to an object over time. | | `[KeyValueDifferFactory](core/keyvaluedifferfactory)` | Provides a factory for [`KeyValueDiffer`](core/keyvaluediffer). | | `[MissingTranslationStrategy](core/missingtranslationstrategy)` | Use this enum at bootstrap as an option of `bootstrapModule` to define the strategy that the compiler should use in case of missing translations:* Error: throw if you have missing translations. * Warning (default): show a warning in the console and/or shell. * Ignore: do nothing. | | `[ModuleWithProviders](core/modulewithproviders)` | A wrapper around an NgModule that associates it with [providers](../guide/glossary#provider "Definition"). Usage without a generic type is deprecated. | | `[OnChanges](core/onchanges)` | A lifecycle hook that is called when any data-bound property of a directive changes. Define an `ngOnChanges()` method to handle the changes. | | `[OnDestroy](core/ondestroy)` | A lifecycle hook that is called when a directive, pipe, or service is destroyed. Use for any custom cleanup that needs to occur when the instance is destroyed. | | `[OnInit](core/oninit)` | A lifecycle hook that is called after Angular has initialized all data-bound properties of a directive. Define an `ngOnInit()` method to handle any additional initialization tasks. | | `[PipeTransform](core/pipetransform)` | An interface that is implemented by pipes in order to perform a transformation. Angular invokes the `transform` method with the value of a binding as the first argument, and any parameters as the second argument in list form. | | `[Predicate](core/predicate)` | A boolean-valued function over a value, possibly including context information regarding that value's position in an array. | | `[RendererStyleFlags2](core/rendererstyleflags2)` | Flags for renderer-specific style modifiers. 
| | `[RendererType2](core/renderertype2)` | Used by `[RendererFactory2](core/rendererfactory2)` to associate custom rendering data and styles with a rendering implementation. | | `[ResolvedReflectiveProvider](core/resolvedreflectiveprovider)` | An internal resolved representation of a `[Provider](core/provider)` used by the `[Injector](core/injector)`. | | `[SchemaMetadata](core/schemametadata)` | A schema definition associated with an NgModule. | | `[SecurityContext](core/securitycontext)` | A SecurityContext marks a location that has dangerous security implications, e.g. a DOM property like `innerHTML` that could cause Cross Site Scripting (XSS) security bugs when improperly handled. | | `[SimpleChanges](core/simplechanges)` | A hashtable of changes represented by [`SimpleChange`](core/simplechange) objects stored at the declared property name they belong to on a Directive or Component. This is the type passed to the `ngOnChanges` hook. | | `[StaticClassProvider](core/staticclassprovider)` | Configures the `[Injector](core/injector)` to return an instance of `useClass` for a token. | | `[StaticClassSansProvider](core/staticclasssansprovider)` | Configures the `[Injector](core/injector)` to return an instance of `useClass` for a token. Base for `[StaticClassProvider](core/staticclassprovider)` decorator. | | `[TrackByFunction](core/trackbyfunction)` | A function optionally passed into the `[NgForOf](common/ngforof)` directive to customize how `[NgForOf](common/ngforof)` uniquely identifies items in an iterable. | | `[TypeDecorator](core/typedecorator)` | An interface implemented by all Angular type decorators, which allows them to be used as decorators as well as Angular syntax. | | `[TypeProvider](core/typeprovider)` | Configures the `[Injector](core/injector)` to return an instance of `[Type](core/type)` when `Type` is used as the token. | | `[ValueProvider](core/valueprovider)` | Configures the `[Injector](core/injector)` to return a value for a token. | | `[ValueSansProvider](core/valuesansprovider)` | Configures the `[Injector](core/injector)` to return a value for a token. Base for `[ValueProvider](core/valueprovider)` decorator. | | `[ViewEncapsulation](core/viewencapsulation)` | Defines the CSS styles encapsulation policies for the [`Component`](core/component) decorator's `encapsulation` option. | ### Elements | | | | --- | --- | | `[<ng-container>](core/ng-container)` | A special element that can hold structural directives without adding new elements to the DOM. | | `[<ng-content>](core/ng-content)` | The `[<ng-content>](core/ng-content)` element specifies where to project content inside a component template. | | `[<ng-template>](core/ng-template)` | Angular's `[<ng-template>](core/ng-template)` element defines a template that is not rendered by default. | ### Types | | | | --- | --- | | `[ANALYZE\_FOR\_ENTRY\_COMPONENTS](core/analyze_for_entry_components)` | **Deprecated:** Since 9.0.0. With Ivy, this property is no longer necessary. A DI token that you can use to create a virtual [provider](../guide/glossary#provider) that will populate the `entryComponents` field of components and NgModules based on its `useValue` property value. All components that are referenced in the `useValue` value (either directly or in a nested array or map) are added to the `entryComponents` property. | | `[ANIMATION\_MODULE\_TYPE](core/animation_module_type)` | A [DI token](../guide/glossary#di-token "DI token definition") that indicates which animations module has been loaded. 
| | `[APP\_BOOTSTRAP\_LISTENER](core/app_bootstrap_listener)` | A [DI token](../guide/glossary#di-token "DI token definition") that provides a set of callbacks to be called for every component that is bootstrapped. | | `[APP\_ID](core/app_id)` | A [DI token](../guide/glossary#di-token "DI token definition") representing a unique string ID, used primarily for prefixing application attributes and CSS styles when [ViewEncapsulation.Emulated](core/viewencapsulation#Emulated) is being used. | | `[APP\_INITIALIZER](core/app_initializer)` | A [DI token](../guide/glossary#di-token "DI token definition") that you can use to provide one or more initialization functions. | | `[COMPILER\_OPTIONS](core/compiler_options)` | Token to provide CompilerOptions in the platform injector. | | `[CUSTOM\_ELEMENTS\_SCHEMA](core/custom_elements_schema)` | Defines a schema that allows an NgModule to contain the following:* Non-Angular elements named with dash case (`-`). * Element properties named with dash case (`-`). Dash case is the naming convention for custom elements. | | `[CompilerOptions](core/compileroptions)` | Options for creating a compiler. | | `[DEFAULT\_CURRENCY\_CODE](core/default_currency_code)` | Provide this token to set the default currency code your application uses for CurrencyPipe when there is no currency code passed into it. This is only used by CurrencyPipe and has no relation to locale currency. Defaults to USD if not configured. | | `[ENVIRONMENT\_INITIALIZER](core/environment_initializer)` | A multi-provider token for initialization functions that will run upon construction of an environment injector. | | `[EnvironmentProviders](core/environmentproviders)` | Encapsulated `[Provider](core/provider)`s that are only accepted during creation of an `[EnvironmentInjector](core/environmentinjector)` (e.g. in an `[NgModule](core/ngmodule)`). | | `[INJECTOR](core/injector)` | An InjectionToken that gets the current `[Injector](core/injector)` for `createInjector()`-style injectors. | | `[ImportProvidersSource](core/importproviderssource)` | A source of providers for the `[importProvidersFrom](core/importprovidersfrom)` function. | | `[ImportedNgModuleProviders](core/importedngmoduleproviders)` | **Deprecated:** replaced by `[EnvironmentProviders](core/environmentproviders)` Providers that were imported from NgModules via the `[importProvidersFrom](core/importprovidersfrom)` function. | | `[InjectableProvider](core/injectableprovider)` | Injectable providers used in `@[Injectable](core/injectable)` decorator. | | `[LOCALE\_ID](core/locale_id)` | Provide this token to set the locale of your application. It is used for i18n extraction, by i18n pipes (DatePipe, I18nPluralPipe, CurrencyPipe, DecimalPipe and PercentPipe) and by ICU expressions. | | `[NO\_ERRORS\_SCHEMA](core/no_errors_schema)` | Defines a schema that allows any property on any element. | | `[NgIterable](core/ngiterable)` | A type describing supported iterable types. | | `[PACKAGE\_ROOT\_URL](core/package_root_url)` | A [DI token](../guide/glossary#di-token "DI token definition") that indicates the root directory of the application | | `[PLATFORM\_ID](core/platform_id)` | A token that indicates an opaque platform ID. | | `[PLATFORM\_INITIALIZER](core/platform_initializer)` | A function that is executed when a platform is initialized. | | `[Provider](core/provider)` | Describes how the `[Injector](core/injector)` should be configured. 
| | `[ProviderToken](core/providertoken)` | Token that can be used to retrieve an instance from an injector or through a query. | | `[StaticProvider](core/staticprovider)` | Describes how an `[Injector](core/injector)` should be configured as static (that is, without reflection). A static provider provides tokens to an injector for various types of dependencies. | | `[TRANSLATIONS](core/translations)` | Use this token at bootstrap to provide the content of your translation file (`xtb`, `xlf` or `xlf2`) when you want to translate your application in another language. | | `[TRANSLATIONS\_FORMAT](core/translations_format)` | Provide this token at bootstrap to set the format of your [`TRANSLATIONS`](core/translations): `xtb`, `xlf` or `xlf2`. | | `[createNgModuleRef](core/createngmoduleref)` | **Deprecated:** Use `[createNgModule](core/createngmodule)` instead. The `[createNgModule](core/createngmodule)` function alias for backwards-compatibility. Please avoid using it directly and use `[createNgModule](core/createngmodule)` instead. | | `[defineInjectable](core/defineinjectable)` | **Deprecated:** in v8, delete after v10. This API should be used only by generated code, and that code should now use ɵɵdefineInjectable instead. | | `[platformCore](core/platformcore)` | This platform has to be included in any other platform |
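As a brief illustration of how several of the `@angular/core` symbols above fit together (`@Component`, `@Input`, `@Output` with `EventEmitter`, `InjectionToken`, and `inject`), here is a minimal sketch. The `GREETING` token, the `GreeterComponent`, and its selector are illustrative names only, not part of the Angular API.

```ts
import {
  Component,
  EventEmitter,
  InjectionToken,
  Input,
  Output,
  inject,
} from '@angular/core';

// Illustrative DI token with a default factory value (name and value are made up).
export const GREETING = new InjectionToken<string>('greeting', {
  providedIn: 'root',
  factory: () => 'Hello',
});

@Component({
  selector: 'app-greeter',
  standalone: true,
  template: `<button (click)="greet.emit(name)">{{ message }}, {{ name }}</button>`,
})
export class GreeterComponent {
  // `inject` is supported in field initializers of classes created by the DI system.
  readonly message = inject(GREETING);

  // `name` is bound from a parent template; `greet` emits a custom event to the parent.
  @Input() name = 'world';
  @Output() greet = new EventEmitter<string>();
}
```

A parent template could then use `<app-greeter name="Angular" (greet)="onGreet($event)"></app-greeter>`, assuming the parent defines an `onGreet` handler.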
angular @angular/platform-browser @angular/platform-browser ========================= `package` Supports execution of Angular apps on different supported browsers. The `[BrowserModule](platform-browser/browsermodule)` is included by default in any app created through the CLI, and it re-exports the `[CommonModule](common/commonmodule)` and `[ApplicationModule](core/applicationmodule)` exports, making basic Angular functionality available to the app. For more information, see [Browser Support](../guide/browser-support). Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/platform-browser](platform-browser#primary-entry-point-exports)` | Supports execution of Angular apps on different supported browsers. | ### Secondary | | | | --- | --- | | `[@angular/platform-browser/animations](platform-browser/animations)` | Provides infrastructure for the rendering of animations in supported browsers. | | `[@angular/platform-browser/testing](platform-browser/testing)` | Supplies a testing module for the Angular platform-browser subsystem. | Primary entry point exports --------------------------- ### NgModules | | | | --- | --- | | `[BrowserModule](platform-browser/browsermodule)` | Exports required infrastructure for all Angular apps. Included by default in all Angular apps created with the CLI `new` command. Re-exports `[CommonModule](common/commonmodule)` and `[ApplicationModule](core/applicationmodule)`, making their exports and providers available to all apps. | | `[BrowserTransferStateModule](platform-browser/browsertransferstatemodule)` | **Deprecated:** no longer needed, you can inject the `[TransferState](platform-browser/transferstate)` in an app without providing this module. NgModule to install on the client side while using the `[TransferState](platform-browser/transferstate)` to transfer state from server to client. | | `[HammerModule](platform-browser/hammermodule)` | Adds support for HammerJS. | ### Classes | | | | --- | --- | | `[By](platform-browser/by)` | Predicates for use with [`DebugElement`](core/debugelement)'s query functions. | | `[DomSanitizer](platform-browser/domsanitizer)` | DomSanitizer helps preventing Cross Site Scripting Security bugs (XSS) by sanitizing values to be safe to use in the different DOM contexts. | | `[EventManager](platform-browser/eventmanager)` | An injectable service that provides event management for Angular through a browser plug-in. | | `[HammerGestureConfig](platform-browser/hammergestureconfig)` | An injectable [HammerJS Manager](https://hammerjs.github.io/api/#hammermanager) for gesture recognition. Configures specific event recognition. | | `[Meta](platform-browser/meta)` | A service for managing HTML `<meta>` tags. | | `[Title](platform-browser/title)` | A service that can be used to get and set the title of a current HTML document. | | `[TransferState](platform-browser/transferstate)` | A key value store that is transferred from the application on the server side to the application on the client side. | ### Functions | | | | --- | --- | | `[bootstrapApplication](platform-browser/bootstrapapplication)` | Bootstraps an instance of an Angular application and renders a standalone component as the application's root component. More information about standalone components can be found in [this guide](../guide/standalone-components). | | `[createApplication](platform-browser/createapplication)` | Create an instance of an Angular application without bootstrapping any components. 
This is useful for the situation where one wants to decouple application environment creation (a platform and associated injectors) from rendering components on a screen. Components can be subsequently bootstrapped on the returned `[ApplicationRef](core/applicationref)`. | | `[disableDebugTools](platform-browser/disabledebugtools)` | Disables Angular tools. | | `[enableDebugTools](platform-browser/enabledebugtools)` | Enables Angular debug tools that are accessible via your browser's developer console. | | `[makeStateKey](platform-browser/makestatekey)` | Create a `[StateKey](platform-browser/statekey)<T>` that can be used to store a value of type T with `[TransferState](platform-browser/transferstate)`. | | `[provideProtractorTestingSupport](platform-browser/provideprotractortestingsupport)` | Returns a set of providers required to set up [Testability](core/testability) for an application bootstrapped using the `[bootstrapApplication](platform-browser/bootstrapapplication)` function. The set of providers is needed to support testing an application with Protractor (which relies on the Testability APIs to be present). | ### Structures | | | | --- | --- | | `[ApplicationConfig](platform-browser/applicationconfig)` | Set of config options available during the application bootstrap operation. | | `[SafeHtml](platform-browser/safehtml)` | Marker interface for a value that's safe to use as HTML. | | `[SafeResourceUrl](platform-browser/saferesourceurl)` | Marker interface for a value that's safe to use as a URL to load executable code from. | | `[SafeScript](platform-browser/safescript)` | Marker interface for a value that's safe to use as JavaScript. | | `[SafeStyle](platform-browser/safestyle)` | Marker interface for a value that's safe to use as style (CSS). | | `[SafeUrl](platform-browser/safeurl)` | Marker interface for a value that's safe to use as a URL linking to a document. | | `[SafeValue](platform-browser/safevalue)` | Marker interface for a value that's safe to use in a particular context. | ### Types | | | | --- | --- | | `[EVENT\_MANAGER\_PLUGINS](platform-browser/event_manager_plugins)` | The injection token for the event-manager plug-in service. | | `[HAMMER\_GESTURE\_CONFIG](platform-browser/hammer_gesture_config)` | DI token for providing [HammerJS](https://hammerjs.github.io/) support to Angular. | | `[HAMMER\_LOADER](platform-browser/hammer_loader)` | Injection token used to provide a [`HammerLoader`](platform-browser/hammerloader) to Angular. | | `[HammerLoader](platform-browser/hammerloader)` | Function that loads HammerJS, returning a promise that is resolved once HammerJS is loaded. | | `[MetaDefinition](platform-browser/metadefinition)` | Represents the attributes of an HTML `<meta>` element. The element itself is represented by the internal `HTMLMetaElement`. | | `[StateKey](platform-browser/statekey)` | A type-safe key to use with `[TransferState](platform-browser/transferstate)`. | | `[platformBrowser](platform-browser/platformbrowser)` | A factory function that returns a `[PlatformRef](core/platformref)` instance associated with browser service providers. | angular @angular/upgrade @angular/upgrade ================ `package` Provides support for upgrading applications from AngularJS to Angular. The primary entry point is deprecated. Use the secondary entry point, `<upgrade/static>`. See [Angular deprecation policy](../guide/deprecations). 
Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/upgrade](upgrade#primary-entry-point-exports)` | **Deprecated:** all exports of this entry point are deprecated. Provides support for upgrading applications from Angular JS to Angular. The primary entry point is deprecated. Use the secondary entry point, `<upgrade/static>`. | ### Secondary | | | | --- | --- | | `[@angular/upgrade/static](upgrade/static)` | Supports the upgrade path from AngularJS to Angular, allowing components and services from both systems to be used in the same application. | | `[@angular/upgrade/static/testing](upgrade/static/testing)` | Supplies testing functions for the AngularJS-to-Angular upgrade path. | Primary entry point exports --------------------------- **Deprecated:** all exports of this entry point are deprecated. ### Classes | | | | --- | --- | | `[UpgradeAdapter](upgrade/upgradeadapter)` | **Deprecated:** Deprecated since v5. Use `<upgrade/static>` instead, which also supports [Ahead-of-Time compilation](../guide/aot-compiler). Use `[UpgradeAdapter](upgrade/upgradeadapter)` to allow AngularJS and Angular to coexist in a single application. | | `[UpgradeAdapterRef](upgrade/upgradeadapterref)` | **Deprecated:** Deprecated since v5. Use `<upgrade/static>` instead, which also supports [Ahead-of-Time compilation](../guide/aot-compiler). Use `[UpgradeAdapterRef](upgrade/upgradeadapterref)` to control a hybrid AngularJS / Angular application. | angular @angular/elements @angular/elements ================= `package` Implements Angular's custom-element API, which enables you to package components as [custom elements](https://developer.mozilla.org/en-US/docs/Web/Web_Components/Using_custom_elements). A custom element extends HTML by allowing you to define a tag whose content is created and controlled by JavaScript code. The browser maintains a `CustomElementRegistry` of defined custom elements (also called Web Components), which maps an instantiable JavaScript class to an HTML tag. The `[createCustomElement](elements/createcustomelement)()` function provides a bridge from Angular's component interface and change detection functionality to the built-in DOM API. For more information, see [Angular Elements Overview](../guide/elements). Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/elements](elements#primary-entry-point-exports)` | Implements Angular's custom-element API, which enables you to package components as [custom elements](https://developer.mozilla.org/en-US/docs/Web/Web_Components/Using_custom_elements). | Primary entry point exports --------------------------- ### Classes | | | | --- | --- | | `[NgElement](elements/ngelement)` | Implements the functionality needed for a custom element. | ### Functions | | | | --- | --- | | `[createCustomElement](elements/createcustomelement)` | Creates a custom element class based on an Angular component. | ### Structures | | | | --- | --- | | `[NgElementConfig](elements/ngelementconfig)` | A configuration that initializes an NgElementConstructor with the dependencies and strategy it needs to transform a component into a custom element class. | | `[NgElementConstructor](elements/ngelementconstructor)` | Prototype for a class constructor based on an Angular component that can be used for custom element registration. Implemented and returned by the [createCustomElement() function](elements/createcustomelement). 
| | `[NgElementStrategy](elements/ngelementstrategy)` | Underlying strategy used by the NgElement to create/destroy the component and react to input changes. | | `[NgElementStrategyEvent](elements/ngelementstrategyevent)` | Interface for the events emitted through the NgElementStrategy. | | `[NgElementStrategyFactory](elements/ngelementstrategyfactory)` | Factory used to create new strategies for each NgElement instance. | ### Types | | | | --- | --- | | `[WithProperties](elements/withproperties)` | Additional type information that can be added to the NgElement class, for properties that are added based on the inputs and methods of the underlying component. | angular @angular/common @angular/common =============== `package` Implements fundamental Angular framework functionality, including directives and pipes, location services used in routing, HTTP services, localization support, and so on. The `[CommonModule](common/commonmodule)` exports are re-exported by `[BrowserModule](platform-browser/browsermodule)`, which is included automatically in the root `AppModule` when you create a new app with the CLI `new` command. Entry points ------------ ### Primary | | | | --- | --- | | `[@angular/common](common#primary-entry-point-exports)` | Implements fundamental Angular framework functionality, including directives and pipes, location services used in routing, HTTP services, localization support, and so on. | ### Secondary | | | | --- | --- | | `[@angular/common/http](common/http)` | Implements an HTTP client API for Angular apps that relies on the `XMLHttpRequest` interface exposed by browsers. | | `[@angular/common/http/testing](common/http/testing)` | Supplies a testing module for the Angular HTTP subsystem. | | `[@angular/common/testing](common/testing)` | Supplies infrastructure for testing functionality supplied by `@angular/common`. | | `[@angular/common/upgrade](common/upgrade)` | Provides tools for upgrading from the `$location` service provided in AngularJS to Angular's [unified location service](../guide/upgrade#using-the-unified-angular-location-service). | Primary entry point exports --------------------------- ### NgModules | | | | --- | --- | | `[CommonModule](common/commonmodule)` | Exports all the basic Angular directives and pipes, such as `[NgIf](common/ngif)`, `[NgForOf](common/ngforof)`, `[DecimalPipe](common/decimalpipe)`, and so on. Re-exported by `[BrowserModule](platform-browser/browsermodule)`, which is included automatically in the root `AppModule` when you create a new app with the CLI `new` command. | ### Classes | | | | --- | --- | | `[BrowserPlatformLocation](common/browserplatformlocation)` | `[PlatformLocation](common/platformlocation)` encapsulates all of the direct calls to platform APIs. This class should not be used directly by an application developer. Instead, use [`Location`](common/location). | | `[HashLocationStrategy](common/hashlocationstrategy)` | A [`LocationStrategy`](common/locationstrategy) used to configure the [`Location`](common/location) service to represent its state in the [hash fragment](https://en.wikipedia.org/wiki/Uniform_Resource_Locator#Syntax) of the browser's URL. | | `[Location](common/location)` | A service that applications can use to interact with a browser's URL. | | `[LocationStrategy](common/locationstrategy)` | Enables the `[Location](common/location)` service to read route state from the browser's URL. 
Angular provides two strategies: `[HashLocationStrategy](common/hashlocationstrategy)` and `[PathLocationStrategy](common/pathlocationstrategy)`. | | `[NgForOfContext](common/ngforofcontext)` | | | `[NgIfContext](common/ngifcontext)` | | | `[NgLocaleLocalization](common/nglocalelocalization)` | Returns the plural case based on the locale | | `[NgLocalization](common/nglocalization)` | | | `[PathLocationStrategy](common/pathlocationstrategy)` | A [`LocationStrategy`](common/locationstrategy) used to configure the [`Location`](common/location) service to represent its state in the [path](https://en.wikipedia.org/wiki/Uniform_Resource_Locator#Syntax) of the browser's URL. | | `[PlatformLocation](common/platformlocation)` | This class should not be used directly by an application developer. Instead, use [`Location`](common/location). | | `[ViewportScroller](common/viewportscroller)` | Defines a scroll position manager. Implemented by `BrowserViewportScroller`. | | `[XhrFactory](common/xhrfactory)` | A wrapper around the `XMLHttpRequest` constructor. | ### Functions | | | | --- | --- | | `[formatCurrency](common/formatcurrency)` | Formats a number as currency using locale rules. | | `[formatDate](common/formatdate)` | Formats a date according to locale rules. | | `[formatNumber](common/formatnumber)` | Formats a number as text, with group sizing, separator, and other parameters based on the locale. | | `[formatPercent](common/formatpercent)` | Formats a number as a percentage according to locale rules. | | `[getCurrencySymbol](common/getcurrencysymbol)` | Retrieves the currency symbol for a given currency code. | | `[getLocaleCurrencyCode](common/getlocalecurrencycode)` | Retrieves the default currency code for the given locale. | | `[getLocaleCurrencyName](common/getlocalecurrencyname)` | Retrieves the name of the currency for the main country corresponding to a given locale. For example, 'US Dollar' for `en-US`. | | `[getLocaleCurrencySymbol](common/getlocalecurrencysymbol)` | Retrieves the symbol used to represent the currency for the main country corresponding to a given locale. For example, '$' for `en-US`. | | `[getLocaleDateFormat](common/getlocaledateformat)` | Retrieves a localized date-value formatting string. | | `[getLocaleDateTimeFormat](common/getlocaledatetimeformat)` | Retrieves a localized date-time formatting string. | | `[getLocaleDayNames](common/getlocaledaynames)` | Retrieves days of the week for the given locale, using the Gregorian calendar. | | `[getLocaleDayPeriods](common/getlocaledayperiods)` | Retrieves day period strings for the given locale. | | `[getLocaleDirection](common/getlocaledirection)` | Retrieves the writing direction of a specified locale | | `[getLocaleEraNames](common/getlocaleeranames)` | Retrieves Gregorian-calendar eras for the given locale. | | `[getLocaleExtraDayPeriodRules](common/getlocaleextradayperiodrules)` | Retrieves locale-specific rules used to determine which day period to use when more than one period is defined for a locale. | | `[getLocaleExtraDayPeriods](common/getlocaleextradayperiods)` | Retrieves locale-specific day periods, which indicate roughly how a day is broken up in different languages. For example, for `en-US`, periods are morning, noon, afternoon, evening, and midnight. | | `[getLocaleFirstDayOfWeek](common/getlocalefirstdayofweek)` | Retrieves the first day of the week for the given locale. | | `[getLocaleId](common/getlocaleid)` | Retrieves the locale ID from the currently loaded locale. 
The loaded locale could be, for example, a global one rather than a regional one. | | `[getLocaleMonthNames](common/getlocalemonthnames)` | Retrieves months of the year for the given locale, using the Gregorian calendar. | | `[getLocaleNumberFormat](common/getlocalenumberformat)` | Retrieves a number format for a given locale. | | `[getLocaleNumberSymbol](common/getlocalenumbersymbol)` | Retrieves a localized number symbol that can be used to replace placeholders in number formats. | | `[getLocalePluralCase](common/getlocalepluralcase)` | Retrieves the plural function used by ICU expressions to determine the plural case to use for a given locale. | | `[getLocaleTimeFormat](common/getlocaletimeformat)` | Retrieves a localized time-value formatting string. | | `[getLocaleWeekEndRange](common/getlocaleweekendrange)` | Range of week days that are considered the week-end for the given locale. | | `[getNumberOfCurrencyDigits](common/getnumberofcurrencydigits)` | Reports the number of decimal digits for a given currency. The value depends upon the presence of cents in that particular currency. | | `[isPlatformBrowser](common/isplatformbrowser)` | Returns whether a platform id represents a browser platform. | | `[isPlatformServer](common/isplatformserver)` | Returns whether a platform id represents a server platform. | | `[isPlatformWorkerApp](common/isplatformworkerapp)` | Returns whether a platform id represents a web worker app platform. | | `[isPlatformWorkerUi](common/isplatformworkerui)` | Returns whether a platform id represents a web worker UI platform. | | `[registerLocaleData](common/registerlocaledata)` | Register global data to be used internally by Angular. See the ["I18n guide"](../guide/i18n-common-format-data-locale) to know how to import additional locale data. | ### Structures | | | | --- | --- | | `[DatePipeConfig](common/datepipeconfig)` | An interface that describes the date pipe configuration, which can be provided using the `[DATE\_PIPE\_DEFAULT\_OPTIONS](common/date_pipe_default_options)` token. | | `[FormStyle](common/formstyle)` | Context-dependant translation forms for strings. Typically the standalone version is for the nominative form of the word, and the format version is used for the genitive case. | | `[FormatWidth](common/formatwidth)` | String widths available for date-time formats. The specific character widths are locale-specific. Examples are given for `en-US`. | | `[ImageLoaderConfig](common/imageloaderconfig)` | Config options recognized by the image loader function. | | `[KeyValue](common/keyvalue)` | A key value pair. Usually used to represent the key value pairs from a Map or Object. | | `[LocationChangeEvent](common/locationchangeevent)` | A serializable version of the event from `onPopState` or `onHashChange` | | `[LocationChangeListener](common/locationchangelistener)` | | | `[NumberFormatStyle](common/numberformatstyle)` | Format styles that can be used to represent numbers. | | `[NumberSymbol](common/numbersymbol)` | Symbols that can be used to replace placeholders in number patterns. Examples are based on `en-US` values. | | `[Plural](common/plural)` | Plurality cases used for translating plurals to different languages. | | `[PopStateEvent](common/popstateevent)` | | | `[TranslationWidth](common/translationwidth)` | String widths available for translations. The specific character widths are locale-specific. Examples are given for the word "Sunday" in English. 
| | `[WeekDay](common/weekday)` | The value for each day of the week, based on the `en-US` locale | ### Directives | | | | --- | --- | | `[NgClass](common/ngclass)` | Adds and removes CSS classes on an HTML element. | | `[NgComponentOutlet](common/ngcomponentoutlet)` | Instantiates a [`Component`](core/component) type and inserts its Host View into the current View. `[NgComponentOutlet](common/ngcomponentoutlet)` provides a declarative approach for dynamic component creation. | | `[NgFor](common/ngfor)` | A [structural directive](../guide/structural-directives) that renders a template for each item in a collection. The directive is placed on an element, which becomes the parent of the cloned templates. | | `[NgForOf](common/ngforof)` | A [structural directive](../guide/structural-directives) that renders a template for each item in a collection. The directive is placed on an element, which becomes the parent of the cloned templates. | | `[NgIf](common/ngif)` | A structural directive that conditionally includes a template based on the value of an expression coerced to Boolean. When the expression evaluates to true, Angular renders the template provided in a `then` clause, and when false or null, Angular renders the template provided in an optional `else` clause. The default template for the `else` clause is blank. | | `[NgOptimizedImage](common/ngoptimizedimage)` | Directive that improves image loading performance by enforcing best practices. | | `[NgPlural](common/ngplural)` | Adds / removes DOM sub-trees based on a numeric value. Tailored for pluralization. | | `[NgPluralCase](common/ngpluralcase)` | Creates a view that will be added/removed from the parent [`NgPlural`](common/ngplural) when the given expression matches the plural expression according to CLDR rules. | | `[NgStyle](common/ngstyle)` | An attribute directive that updates styles for the containing HTML element. Sets one or more style properties, specified as colon-separated key-value pairs. The key is a style name, with an optional `.<unit>` suffix (such as 'top.px', 'font-style.em'). The value is an expression to be evaluated. The resulting non-null value, expressed in the given unit, is assigned to the given style property. If the result of evaluation is null, the corresponding style is removed. | | `[NgSwitch](common/ngswitch)` | The `[[ngSwitch](common/ngswitch)]` directive on a container specifies an expression to match against. The expressions to match are provided by `[ngSwitchCase](common/ngswitchcase)` directives on views within the container.* Every view that matches is rendered. * If there are no matches, a view with the `[ngSwitchDefault](common/ngswitchdefault)` directive is rendered. * Elements within the `[[NgSwitch](common/ngswitch)]` statement but outside of any `[NgSwitchCase](common/ngswitchcase)` or `[ngSwitchDefault](common/ngswitchdefault)` directive are preserved at the location. | | `[NgSwitchCase](common/ngswitchcase)` | Provides a switch case expression to match against an enclosing `[ngSwitch](common/ngswitch)` expression. When the expressions match, the given `[NgSwitchCase](common/ngswitchcase)` template is rendered. If multiple match expressions match the switch expression value, all of them are displayed. | | `[NgSwitchDefault](common/ngswitchdefault)` | Creates a view that is rendered when no `[NgSwitchCase](common/ngswitchcase)` expressions match the `[NgSwitch](common/ngswitch)` expression. This statement should be the final case in an `[NgSwitch](common/ngswitch)`. 
| | `[NgTemplateOutlet](common/ngtemplateoutlet)` | Inserts an embedded view from a prepared `[TemplateRef](core/templateref)`. | ### Pipes | | | | --- | --- | | `[AsyncPipe](common/asyncpipe)` | Unwraps a value from an asynchronous primitive. | | `[CurrencyPipe](common/currencypipe)` | Transforms a number to a currency string, formatted according to locale rules that determine group sizing and separator, decimal-point character, and other locale-specific configurations. | | `[DatePipe](common/datepipe)` | Formats a date value according to locale rules. | | `[DecimalPipe](common/decimalpipe)` | Formats a value according to digit options and locale rules. Locale determines group sizing and separator, decimal point character, and other locale-specific configurations. | | `[I18nPluralPipe](common/i18npluralpipe)` | Maps a value to a string that pluralizes the value according to locale rules. | | `[I18nSelectPipe](common/i18nselectpipe)` | Generic selector that displays the string that matches the current value. | | `[JsonPipe](common/jsonpipe)` | Converts a value into its JSON-format representation. Useful for debugging. | | `[KeyValuePipe](common/keyvaluepipe)` | Transforms Object or Map into an array of key value pairs. | | `[LowerCasePipe](common/lowercasepipe)` | Transforms text to all lower case. | | `[PercentPipe](common/percentpipe)` | Transforms a number to a percentage string, formatted according to locale rules that determine group sizing and separator, decimal-point character, and other locale-specific configurations. | | `[SlicePipe](common/slicepipe)` | Creates a new `Array` or `String` containing a subset (slice) of the elements. | | `[TitleCasePipe](common/titlecasepipe)` | Transforms text to title case. Capitalizes the first letter of each word and transforms the rest of the word to lower case. Words are delimited by any whitespace character, such as a space, tab, or line-feed character. | | `[UpperCasePipe](common/uppercasepipe)` | Transforms text to all upper case. | ### Types | | | | --- | --- | | `[APP\_BASE\_HREF](common/app_base_href)` | A predefined [DI token](../guide/glossary#di-token) for the base href to be used with the `[PathLocationStrategy](common/pathlocationstrategy)`. The base href is the URL prefix that should be preserved when generating and recognizing URLs. | | `[DATE\_PIPE\_DEFAULT\_OPTIONS](common/date_pipe_default_options)` | DI token that allows to provide default configuration for the `[DatePipe](common/datepipe)` instances in an application. The value is an object which can include the following fields:* `dateFormat`: configures the default date format. If not provided, the `[DatePipe](common/datepipe)` will use the 'mediumDate' as a value. * `timezone`: configures the default timezone. If not provided, the `[DatePipe](common/datepipe)` will use the end-user's local system timezone. | | `[DATE\_PIPE\_DEFAULT\_TIMEZONE](common/date_pipe_default_timezone)` | **Deprecated:** use DATE\_PIPE\_DEFAULT\_OPTIONS token to configure DatePipe Optionally-provided default timezone to use for all instances of `[DatePipe](common/datepipe)` (such as `'+0430'`). If the value isn't provided, the `[DatePipe](common/datepipe)` will use the end-user's local system timezone. | | `[DOCUMENT](common/document)` | A DI Token representing the main rendering context. In a browser this is the DOM Document. | | `[IMAGE\_CONFIG](common/image_config)` | Injection token that configures the image optimized image functionality. 
| | `[IMAGE\_LOADER](common/image_loader)` | Injection token that configures the image loader function. | | `[ImageConfig](common/imageconfig)` | A configuration object for the NgOptimizedImage directive. Contains:* breakpoints: An array of integer breakpoints used to generate srcsets for responsive images. | | `[ImageLoader](common/imageloader)` | Represents an image loader function. Image loader functions are used by the NgOptimizedImage directive to produce full image URL based on the image name and its width. | | `[LOCATION\_INITIALIZED](common/location_initialized)` | Indicates when a location is initialized. | | `[PRECONNECT\_CHECK\_BLOCKLIST](common/preconnect_check_blocklist)` | Injection token to configure which origins should be excluded from the preconnect checks. It can either be a single string or an array of strings to represent a group of origins, for example: | | `[Time](common/time)` | Represents a time value with hours and minutes. | | `[provideCloudflareLoader](common/providecloudflareloader)` | Function that generates an ImageLoader for [Cloudflare Image Resizing](https://developers.cloudflare.com/images/image-resizing/) and turns it into an Angular provider. Note: Cloudflare has multiple image products - this provider is specifically for Cloudflare Image Resizing; it will not work with Cloudflare Images or Cloudflare Polish. | | `[provideCloudinaryLoader](common/providecloudinaryloader)` | Function that generates an ImageLoader for Cloudinary and turns it into an Angular provider. | | `[provideImageKitLoader](common/provideimagekitloader)` | Function that generates an ImageLoader for ImageKit and turns it into an Angular provider. | | `[provideImgixLoader](common/provideimgixloader)` | Function that generates an ImageLoader for Imgix and turns it into an Angular provider. |
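To show how the `@angular/common` building blocks listed above (the `CommonModule` directives such as `NgIf` and `NgForOf`, and pipes such as `DatePipe` and `UpperCasePipe`) combine with `bootstrapApplication` from `@angular/platform-browser`, here is a small, self-contained sketch of a standalone root component. The component, its selector, and the sample data are illustrative only.

```ts
import { CommonModule } from '@angular/common';
import { Component } from '@angular/core';
import { bootstrapApplication } from '@angular/platform-browser';

// Illustrative standalone root component; the event data is made up.
@Component({
  selector: 'app-root',
  standalone: true,
  imports: [CommonModule],
  template: `
    <h1>{{ title | uppercase }}</h1>
    <p *ngIf="events.length === 0">No events yet.</p>
    <ul>
      <li *ngFor="let e of events">{{ e.name }} ({{ e.when | date: 'mediumDate' }})</li>
    </ul>
  `,
})
export class AppComponent {
  title = 'Upcoming events';
  events = [{ name: 'Launch', when: new Date() }];
}

// bootstrapApplication renders the standalone component as the application
// root; no NgModule is required.
bootstrapApplication(AppComponent).catch(err => console.error(err));
```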
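Finally, a hedged sketch of the `@angular/elements` flow described earlier: `createApplication` (from `@angular/platform-browser`) creates an `ApplicationRef` without bootstrapping a root component, and `createCustomElement` bridges a component into a custom element class that can be registered with the browser. The `HelloComponent` and the `hello-element` tag name are hypothetical.

```ts
import { Component, Injector, Input } from '@angular/core';
import { createCustomElement } from '@angular/elements';
import { createApplication } from '@angular/platform-browser';

// Illustrative standalone component to package as a custom element.
@Component({
  selector: 'app-hello',
  standalone: true,
  template: `<p>Hello, {{ name }}!</p>`,
})
export class HelloComponent {
  @Input() name = 'world';
}

// Create the Angular environment without bootstrapping a root component,
// then turn the component into a custom element class and register it.
createApplication().then(appRef => {
  const injector: Injector = appRef.injector;
  const HelloElement = createCustomElement(HelloComponent, { injector });
  customElements.define('hello-element', HelloElement);
});
```

Once registered, the element can be used in plain HTML as `<hello-element name="Angular"></hello-element>`.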